Saturday, May 10, 2025

Avnet, Schneider Electric and Iceotope deliver Micro Data Centre
Avnet, Schneider Electric and Iceotope have been joined by Lenovo, whose ThinkSystem SR670 servers will be deployed in a highly scalable, GPU-rich, liquid-cooled micro data centre solution. Sealed at the chassis level, the new solution enables artificial intelligence (AI), machine learning (ML) and high-performance computing (HPC) workloads to be deployed close to where data is generated and used, regardless of how harsh the environment is. Avnet is providing the integration services for the solution on behalf of Schneider Electric and Iceotope to help them deploy globally to a wide range of customers.

The unique capabilities and technologies Avnet brings to the partnership include:

- Conversion of the Lenovo ThinkSystem SR670 to liquid cooling and integration of Schneider Electric's APC NetShelter liquid-cooled enclosure system.
- Software development to deliver Out of Band Management (OOBM) capability. Through Witekio, an Avnet company, Avnet is working with Iceotope to offer a Redfish-compliant solution enabled by Avnet's MaaXBoard IoT single-board computer (a minimal sketch of a Redfish query follows this article).
- A full portfolio of lifecycle services such as warranty support, field installation and maintenance, advance exchange, repair/refurbishment, and IT asset disposition and value recovery.

The groundbreaking solution is based on Iceotope's Ku:l 2 liquid-cooled chassis, which completely isolates the critical IT from the environment in a sealed and resilient enclosure. This enables secure, tamper-proof computing, storage and networking, providing an extra level of physical and I/O connectivity security in the most extreme locations and climatic conditions. With Schneider Electric's EcoStruxure IT solution, remote monitoring and management is taken to a new level, enabling proactive insights on critical assets that impact the health and availability of IT environments, so infrastructure performance is optimised and risk is minimised.

"Liquid-cooled servers that are cost-effectively deployed at scale will revolutionise the server industry, as well as computing in general, by cutting energy usage, noise and space requirements," says Scott MacDonald, President, Avnet Integrated. "As a leader in industrial solutions, Avnet Integrated is uniquely positioned to simplify the deployment of this solution for our customers in collaboration with Schneider Electric, Iceotope and Lenovo. Our ability to solve complex problems on a global scale for large customers while closely collaborating with all partners is what differentiates us as an advanced systems integrator."

David Craig, CEO, Iceotope comments: "The infrastructure to provision edge expansion will be installed where space provides, outside traditional data centres – we call this the Fluid Edge. Iceotope is dedicated to ensuring the durability, reliability, efficiency and long-term viability of Fluid Edge facilities, where air-cooled approaches have a limited future. Partnering with Lenovo to bring the Ku:l Micro DC to life has accelerated our capability to provide a proven and warranty-backed, chassis-level immersion-cooled HPC design solution to this expanding market."

Steven Carlini, VP Innovation and Data Centre, Schneider Electric states: "This integrated, immersion-cooled solution brings highly intensive and efficient edge computing capabilities that 'drop into' applications from the edge to large-scale HPC. The ability to bring these ready-to-deploy liquid-cooled solutions to market proves the strength of the Avnet, Schneider Electric and Iceotope partnership."
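For context on the Redfish-compliant OOBM capability mentioned above: Redfish is the DMTF's standard REST API for out-of-band hardware management. The minimal sketch below shows roughly what reading thermal telemetry from a Redfish-compliant controller looks like in Python; the management address, credentials and chassis layout are placeholder assumptions, not details of the Avnet/Iceotope implementation.

```python
# Minimal sketch: reading thermal telemetry from a Redfish-compliant
# management controller. Resource paths follow the DMTF Redfish schema;
# the address and credentials are placeholders.
import requests

BMC = "https://192.0.2.10"    # placeholder management controller address
AUTH = ("admin", "password")  # placeholder credentials

# The service root advertises the available resource collections.
# verify=False is only for lab use with self-signed certificates.
root = requests.get(f"{BMC}/redfish/v1/", auth=AUTH, verify=False).json()

# Walk the chassis collection and read each chassis's Thermal resource.
chassis = requests.get(BMC + root["Chassis"]["@odata.id"],
                       auth=AUTH, verify=False).json()
for member in chassis["Members"]:
    thermal = requests.get(BMC + member["@odata.id"] + "/Thermal",
                           auth=AUTH, verify=False).json()
    for sensor in thermal.get("Temperatures", []):
        print(sensor["Name"], sensor.get("ReadingCelsius"))
```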

NAKIVO launches v10.2 with support for S3 Object Lock
NAKIVO launches v10.2 of NAKIVO Backup & Replication. The latest release expands the scope of Microsoft 365 data protection with support for SharePoint Online, adds ransomware protection for backups with Amazon S3 Object Lock support, and introduces Tenant Resource Allocation. The new features are powerful additions to the core functionality of the award-winning solution, which continues to meet the evolving needs of its 16,000+ customer base.

SharePoint Online Backup

With the skyrocketing adoption of Microsoft 365, cybercriminals are increasingly exploiting data in the cloud for financial gain. In addition to cyber threats, Microsoft 365 data faces other data loss risks, such as accidental deletion and retention policy gaps. To help companies address SaaS data protection issues, NAKIVO Backup & Replication provides comprehensive data protection and recovery for Microsoft 365 data, with support for Exchange Online and OneDrive for Business. Version 10.2 adds another layer of protection for Microsoft 365 users with support for SharePoint Online.

SharePoint Online Backup allows companies using Microsoft 365 to:

- Back up SharePoint Online sites and subsites.
- Recover document libraries and lists to the original or a different location.
- Use the search functionality to locate and recover items quickly for compliance and e-discovery purposes.

Ransomware-Proof Backups with Amazon S3 Object Lock

After introducing Backup to Amazon S3 in 2020, NAKIVO adds support for Amazon S3 Object Lock in v10.2 to help businesses mitigate the threat of ransomware to their backups and backup copies. The Amazon S3 Object Lock functionality uses the write-once-read-many (WORM) model to ensure that objects remain immutable for as long as required. Once set, the retention period cannot be shortened or disabled, not even by the root user (a brief sketch of this mechanism follows this article).

Amazon S3 Object Lock support allows companies to:

- Protect their backups stored in Amazon S3 from overwriting and deletion.
- Set retention periods to keep objects immutable for as long as needed.
- Protect their backup data against ransomware and meet compliance requirements.

Tenant Resource Allocation

NAKIVO Backup & Replication v10.2 offers more control and flexibility for managed service providers (MSPs) and large enterprises using the multi-tenant mode. The new Tenant Resource Allocation feature provides an effective means of allocating data protection infrastructure resources to tenants. Administrators can assign hosts, clusters, VMs, Backup Repositories and Transporters to tenants. This feature complements other tenant configuration options in multi-tenant deployments of NAKIVO Backup & Replication. In addition to allocating resources, administrators can implement role-based access control and grant tenants permissions to perform specific data protection activities in the Self-Service Portal.

Feature Availability

SharePoint Online Backup is available with a Backup for Microsoft 365 subscription, which is licensed per user and comes with 24/7 support. Data protection for SharePoint Online starts as low as $0.75 per user/month for a three-year subscription. NAKIVO customers can combine a subscription license for Backup for Microsoft 365 with any NAKIVO Backup & Replication edition and license.

Ransomware-proof backups with Amazon S3 Object Lock are available with perpetual or subscription licenses for the Enterprise, Enterprise Essentials and Enterprise Plus editions of NAKIVO Backup & Replication.

Tenant Resource Allocation is available in multi-tenant deployments of NAKIVO Backup & Replication. For MSPs, multi-tenancy is available with a monthly or annual subscription with the MSP Pro, MSP Enterprise and Enterprise Plus editions. For large enterprises, multi-tenancy is available with subscription or perpetual licenses in the Enterprise and Enterprise Plus editions. The 15-day free trial comes with full access to all NAKIVO Backup & Replication features, including Backup for Microsoft 365, Amazon S3 Object Lock support and Tenant Resource Allocation.

"We're always trying to anticipate business needs when it comes to data protection. NAKIVO Backup & Replication v10.2 expands Backup for Microsoft 365 functionality with SharePoint Online Backup and addresses ransomware threats with the S3 Object Lock functionality," says Bruce Talley, CEO of NAKIVO. "The move to cloud storage and cloud services continues, and we are always ready to meet evolving market trends, so our customers have the tools they need to keep their data safe wherever it resides."
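To make the WORM behaviour concrete, here is a minimal sketch using the AWS boto3 SDK directly, with a placeholder bucket, key and retention period. NAKIVO drives the same mechanism through its own repository configuration, so this illustrates the underlying Amazon S3 behaviour rather than the product's interface.

```python
# Minimal sketch of the Amazon S3 Object Lock (WORM) model described above.
# Bucket name, key and retention period are placeholders.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

# Object Lock must be enabled when the bucket is created
# (this also enables versioning on the bucket).
s3.create_bucket(Bucket="example-backup-bucket",
                 ObjectLockEnabledForBucket=True)

# Write a backup object that stays immutable until the retain-until date.
# In COMPLIANCE mode the retention cannot be shortened or removed by any
# user, including the root account, until it expires.
s3.put_object(
    Bucket="example-backup-bucket",
    Key="backups/vm-001.bak",
    Body=b"...backup data...",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
)
```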

Spectra Logic announces latest version of StorCycle
Spectra Logic has announced the latest release of its award-winning StorCycle Storage Lifecycle Management software. StorCycle 3.3 comes with a range of new features, including an open RESTful API that allows users to integrate StorCycle into a broader set of workflows, as well as enhancements that advance the software's ability to migrate data to disk, tape and cloud storage, including all tiers of Amazon and Microsoft Azure.

StorCycle is a storage lifecycle management software solution that ensures data is stored on the right tier throughout its lifecycle for greater IT and budgetary efficiency. Created for organisations that need lasting protection of, and access to, data which is no longer active but still critical to retain, StorCycle scans primary storage for inactive files and migrates them to a lower-cost tier of storage, which can include any combination of cloud storage, object storage disk, network-attached storage (NAS) and object storage tape. Interoperable with Linux and Windows, StorCycle migrates data without changing original formats and gives users easy access to all data, including data migrated to higher-latency storage mediums like cloud 'cold' tiers and tape. StorCycle 3.3 delivers the following new benefits:

Open API

With the exposed RESTful API, users can take advantage of StorCycle's core features – including scanning, migrating and restoring data – to build integrations and applications that leverage StorCycle's storage lifecycle management capabilities. The exposed API is an excellent tool for advanced users who wish to integrate StorCycle into wider workflows. In addition to providing core commands to configure storage locations, the API helps users build applications to better manage jobs or perform bulk actions without using the web interface (an illustrative sketch of this style of API follows this article).

Extended Cloud Support for Microsoft Azure and Amazon

StorCycle 3.3 extends cloud support to Microsoft Azure, including both the standard (hot/cool) and archive tiers. Azure can be used as a storage target for migrate/store jobs, helping organisations leverage the cost-effectiveness and ease of cloud storage. This is in addition to StorCycle's existing support for the Amazon S3 standard and archive tiers. StorCycle 3.3 also adds further Amazon support, so users can tier data to Glacier and Deep Archive from standard tiers after a migration.

Automated Restore Notification Alerts

When configuring a restore job, StorCycle users can now request automatic email alerts when the restore job completes. This is especially helpful for restore jobs which may take several hours (such as with Amazon Glacier or Azure Archive).

Real-Time Performance and Progress Data

Real-time performance and progress data is now displayed while a migrate or restore job is active. Migrate/store and restore jobs display the data rate (MB/s), the amount of data transferred so far and the job size. Scan jobs display the total directories and files scanned per second. Each performance chart is easily available on the respective jobs pages (migrate, restore, scan), as well as on the main jobs page, which displays all active jobs in StorCycle. This feature is especially useful when many large jobs are running in parallel and other organisational tasks depend on job completion.

OpenLDAP

In addition to Active Directory, StorCycle now provides full support for OpenLDAP for user authentication. When configured, users on the LDAP domain can be given Restore User permissions, allowing them to restore migrated data without Administrator assistance. Configuring OpenLDAP also makes it easier for Administrators to add named users to StorCycle, where they can be given additional permissions in the application, such as Administrator or Storage Manager roles.

StorCycle – Award-Winning Software

StorCycle won several product awards for excellence in innovation, data management and cloud enablement in 2020. The list includes:

- Digital Media World 2020 Gold Awards – StorCycle Storage Lifecycle Management software earned the 2020 Digital Media World Award in the Best Digital Asset Management category. StorCycle was recognised for its ability to bring visibility and insight to better manage storage volumes through intelligent tiering and data migration, while maintaining direct, consistent access to migrated assets and enhancing search capabilities with the use of metadata tags.
- The Cloud Awards – StorCycle was a finalist in the international cloud computing awards programme, The Cloud Awards. StorCycle was recognised as the Best Hybrid Cloud Solution for enabling organisations to implement a more effective and efficient hybrid cloud, reducing data storage costs while optimising data protection.
- The Storries – StorCycle software was honoured with the Storage Innovation of the Year award by Storage Magazine, recognising Spectra's innovation in providing one of the industry's most effective storage lifecycle management software solutions to enhance IT infrastructures.
- DCS Awards – StorCycle software was a finalist in the Product of the Year category at the 2020 DCS Awards, which reward the achievements of product designers, manufacturers, suppliers and providers operating in the data centre arena in Europe.
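As an illustration of how an open RESTful API like StorCycle's can be scripted, the sketch below submits and then polls a migrate job. The endpoint paths, payload fields and authentication scheme here are invented placeholders rather than Spectra Logic's actual schema, which is defined in the StorCycle API documentation.

```python
# Illustrative sketch of driving a storage lifecycle job via a REST API.
# All endpoint paths, payload fields and the token below are hypothetical
# placeholders, not the real StorCycle schema.
import requests

BASE = "https://storcycle.example.com/api"           # placeholder server
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"  # placeholder auth

# Submit a migrate job from primary storage to a cloud archive target.
job = session.post(f"{BASE}/jobs/migrate", json={
    "source": "smb://primary-nas/projects",
    "target": "azure-archive-pool",
    "inactiveDays": 180,     # migrate files idle for six months or more
}).json()

# Poll the job and read the progress fields the article describes
# (state, data rate, amount transferred).
status = session.get(f"{BASE}/jobs/{job['id']}").json()
print(status.get("state"), status.get("bytesTransferred"))
```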

AI and machine learning: data centres need to differentiate to survive
By Peter Ruffley, CEO, Zizo

The promise of AI

At present, the IT industry is doing itself no favours by promising the earth with emerging technologies without the ability to fully deliver on them – take Hadoop's big data story as an example, and look where that is now. There is also a growing need to dispel some of the myths surrounding the capabilities of AI and data-led applications, which often sit within the C-suite: that investment will give them the equivalent of the ship's computer from Star Trek, or the answer to the question 'how can I grow the business?' As part of any AI strategy, it's imperative that businesses, from the board down, have a true understanding of the use cases of AI and where the value lies. If there is a clear business need and an outcome in mind, then AI can be the right tool. But it won't do everything for you – the bulk of the work still has to be done somewhere, either in the machine learning or the data preparation phase.

AI ready vs. AI reality

With IoT, many organisations are chasing the mythical concept of 'let's have every device under management'. But why? What is the real benefit? All they are doing is creating an overwhelming amount of low-value data and expecting data warehouses to store it. If a business keeps data from a device showing that it pinged every 30 seconds rather than every minute, that's just keeping data for the sake of it. There's no strategy there. The 'everyone store everything' mentality needs to change.

One of the main barriers to implementing AI is the challenge of making data available and preparing it. A business cannot become data-driven if it doesn't understand the information it has, and the concept of 'garbage in, garbage out' is especially true of the data used for AI. With many organisations still on the starting blocks, or not yet having completed their journey to becoming data-driven, there appears to be a misplaced assumption that they can leap quickly and easily from preparing their data to implementing AI and ML – which, realistically, won't work. To step successfully into the world of AI, businesses must first ensure the data they are using is good enough.

AI in the data centre

Over the coming years, we are going to see tremendous investment in large-scale and high-performance computing (HPC) installed within organisations to support data analytics and AI. At the same time, there will be an onus on data centre providers to supply these systems without necessarily understanding the infrastructure required to deliver them, or the software or business output needed to get value from them. We saw this in the realm of big data, when everyone tried to throw together some kind of big data solution and it was very easy to say 'we'll use Hadoop to build this giant system'. If we're not careful, the same could happen with AI. There have been many conversations about the fact that, if we were to peel back the layers of many AI solutions, we would find a lot of people still investing a lot of hard work in them; when it comes to automating processes, we aren't quite in that space yet. AI solutions are currently very resource-heavy.

There's no denying that the majority of data centres are now being asked how they provide AI solutions and how they can assist organisations on their AI journey. Organisations might assume that data centres have AI all tied up, but is this really the case? Yes, there is a realisation of the benefits of AI, but how it is best implemented, and by whom, to get the right results hasn't been fully decided. Solutions for improving the performance of large-scale application systems are being created, whether through better processes, better hardware, or reducing running costs via improved cooling or heat-exchange systems. But data centre providers have to be able to combine these infrastructure elements with a deeper understanding of business processes. This is something very few providers – or managed service providers (MSPs) and cloud service providers (CSPs) – are currently doing. It's great to have the kit and use submerged cooling systems and advanced power mechanisms, but what does that give the customer? How can providers help customers understand what more can be done with their data systems? How do providers differentiate themselves, and how can they say they harness these new technologies to do something different? It's easy to go down the route of promoting 'we can save you X, Y, Z', but it means more to be able to say 'what we can achieve with AI is X, Y, Z'. Data centre providers need to move away from trying to win customers over on monetary terms alone.

Education and collaboration

When it comes to AI, there has to be an understanding of the whole strategic vision: where value can be delivered and how a return on investment (ROI) is achieved. Data centre providers need to work towards educating customers on what can be done to get quick wins. Additionally, sustainability is riding high on the business agenda, and this is something providers need to take into consideration. How can the infrastructure needed for emerging technologies work better? Perhaps it's by sharing data across the industry and working together to analyse it. In these cases, maybe the whole is greater than the sum of its parts. The hard part will be convincing people to relinquish control of their data. Can the industry move the conversation on from the purely technical – how much power, how many kilowatts – to how this helps corporate social responsibility and green credentials? There are already some fascinating innovations from which lessons can be learnt. In Scandinavia, for example, carbon-neutral data centres are being built that are completely air-cooled, use sustainable solar-powered cooling, and draw cooling through the building by, essentially, opening the windows. There are also water-cooled data centres under the ocean.

Conclusion

We saw a lot of organisations and data centres jump in head first with the explosion of big data and not come out with any tangible results – we could be on the road to seeing history repeat itself. If we're not careful, AI could become just another IT bubble. There is still time to turn things around. As we move into a world of ever-increasing data volumes, we are constantly searching for the value hidden within the low-value data produced by IoT, smartphone apps and the edge. As global energy costs rise, and the number of HPC clusters powering AI to drive our next-generation technologies increases, new technologies have to be found that lower the cost of running the data centre beyond standard air cooling. It's great to see people thinking outside the box here, with submerged HPC systems and fully naturally aerated data centres, but more will have to be done (and fast) to keep up with global data growth. The appetite for AI is undoubtedly there, but for it to be deployed at scale, and for enterprises to see real value, ROI and new business opportunities from it, data centres need to move the conversation on, work together, and individually utilise AI in the best way possible – or risk losing out to the competition.

Techbuyer partners on Digital Access For All charity launch
Techbuyer is supporting Digital Access For All, an initiative by the charity The Learning Foundation, alongside other select ADISA members. The nationwide campaign, delivered through the ADISA Marketplace, will ensure the secure processing of thousands of corporate donations and facilitate the large-scale redistribution of grade A equipment to disadvantaged families across the UK.

With many schools relying on at least partial delivery of learning through the internet and internet-enabled devices, it is vital that these resources are available to all. However, around 1.5 million school children have limited or no access to them at home and are seeing their education suffer as a result. Remote learning – during isolation periods, lockdowns and for homework – is increasingly part of the norm. Children whose only access is via a parent or guardian's mobile phone are hampered in their ability to get the most out of it, either because of the lower functionality of a mobile device or because there is limited scope for essay writing and developing ideas.

National charity The Learning Foundation launched the Digital Access For All initiative this year to level the digital playing field for children and young people across the UK. It has also established the Digital Poverty Alliance to address digital exclusion and data poverty across the board. The latest phase is a programme encouraging businesses to donate their redundant IT to those who need it most. The charity is partnering with ADISA, which is listed in the National Cyber Security Centre's guidance for the disposal of infrastructure, to deliver the programme via the ADISA Marketplace.

"I am delighted to be working with ADISA and their partners on this hugely important initiative. The fact that donating companies can be assured of the highest level of integrity and standards, combined with being able to help directly in enabling disconnected children and families to get online, makes this a genuine win-win," says Paul Finnis, CEO, Learning Foundation and Digital Access for All.

The ADISA Marketplace uses a limited set of approved IT asset disposition companies to process donations to the highest standards and generate the best return for the charity. The Marketplace fulfils the need for grade A PCs, laptops and monitors among organisations that apply to the charity for help.

"This is a really worthwhile project to be involved with, and Techbuyer has fully supported its development from the early stages. ADISA is known throughout our industry as the gold standard for compliance and best practice, so we knew that the project would be well run and deliver the most value for the people who need it most. One of the challenges with corporate donations is that they sometimes include equipment which is not in the best shape; the approach devised for this initiative works around this issue to maximise the equipment that can be donated and reused. We are very proud at Techbuyer to have been asked to be a part of this," says Mick Payne, Techbuyer Managing Director.

HelpSystems acquires FileCatalyst to expand automation portfolio
HelpSystems announced today the acquisition of FileCatalyst, a leader in enterprise file transfer acceleration. FileCatalyst enables organisations working with extremely large files to optimise and transfer their information swiftly and securely across global networks. This can be particularly beneficial in industries such as broadcast media and live sports.

With the increasing need to share video and other media-rich files, big data, and extensive databases, many businesses struggle with technology and bandwidth constraints. FileCatalyst solves these challenges by enabling files to move at speeds hundreds of times faster than FTP allows, while ensuring secure and reliable delivery and tracking. This empowers businesses to work more efficiently, without the latency and packet loss that can plague the movement of vast amounts of information in content distribution, file sharing and offsite backups.

"Our customers and partners have expressed a growing need to move significant volumes of data more quickly than ever before, and FileCatalyst addresses this problem effectively for many well-known organisations," says Kate Bolseth, CEO, HelpSystems. "FileCatalyst is an excellent addition to our managed file transfer and robotic process automation offerings, and we are pleased to bring the FileCatalyst team and their strong file acceleration knowledge into the global HelpSystems family."

"We are thrilled to become part of a company with deep roots and expertise in both cybersecurity and automation," comments Chris Bailey, CEO and Co-Founder, FileCatalyst. "Our customers will find value in pairing our file transfer acceleration solutions with HelpSystems' extensive solution suites."

NTT expands its data centre footprint in the Berlin market with two projects
Global Data Centres, a division of NTT, has announced the construction of a new Berlin 2 Data Centre campus, as well as the expansion of its Berlin 1 Data Centre. The new Berlin 2 campus is being built in Marienpark, 14km from NTT's existing Berlin 1 facility. Once fully operational, it will add a total of 48MW of IT load to NTT's data centre portfolio. The first building is scheduled to open in Q4 2021 and will offer 4,800sqm of IT space and 12MW of IT load. The new state-of-the-art, highly secure, energy-efficient campus will follow NTT's successful business model and offer colocation services for wholesale and retail, as well as hybrid IT solutions and maximum protection for the critical and sensitive IT systems of NTT's clients.

In addition, the Berlin 1 campus in Berlin-Spandau, operational since 2007, will be complemented by a second data centre building, providing a further 2,500sqm of IT space and 5MW of IT load. The building is scheduled to be ready for service in Q3 2021 in response to market demand.

"We are looking forward to future growth in Germany's vibrant and diverse digital industry. Thus, the Berlin 2 campus is designed to serve all types of client needs, from the local start-up looking for a pre-installed single rack all the way up to built-to-suit solutions for major hyperscale deployments. The project launch also comes at the right time to complement the growth at our existing Berlin 1 site," comments Florian Winkler, CEO of NTT Ltd.'s Global Data Centres division in EMEA.

Clients in both Berlin data centres will have access to NTT's Multi Service Interconnection Platform (MSIP), which provides NTT Group's clients and partners with access to a powerful digital ecosystem of multiple carriers and cloud providers. The proximity of the two locations within Berlin gives clients the opportunity to operate their infrastructure in both Berlin 1 and the new campus in an 'active-active' configuration. Berlin 2 will also feature one of NTT's Technology Experience Labs to help clients and partners thrive in the age of continuous digitisation and disruption.

The expansion in the Berlin area is part of the growth strategy of NTT's Global Data Centres division, which operates one of the largest data centre platforms in the world, comprising over 160 data centres spanning more than 20 countries and regions.

Thomson Reuters taps AWS to power digital transformation
Amazon Web Services has announced that Thomson Reuters has successfully completed a large-scale migration to AWS, which will enable the company to innovate faster, develop and act on new insights, and become a more agile business in the cloud. As part of its ongoing move to the cloud, Thomson Reuters migrated thousands of servers and hundreds of revenue-generating applications to AWS. Expanding its longstanding relationship with AWS, Thomson Reuters is also leveraging AWS's portfolio of cloud services – including analytics, database, containers, serverless, storage, machine learning and security – to build new digital products for its customers and reveal greater insights into the industries it covers.

Thomson Reuters provides highly specialised information-enabled software and tools for legal, tax, accounting and compliance professionals, combined with Reuters, the world's global news service. Across these sectors, its products and tools help professionals better understand their industries, streamline operations, increase efficiencies and mitigate risk.

In 2018, Thomson Reuters partnered with AWS Professional Services, AWS Managed Services (where AWS operates infrastructure on a customer's behalf) and AWS-certified third-party experts from multiple consulting partners, who provided hands-on expertise, day-to-day infrastructure management and cost optimisation. The decision to use these AWS resources enabled Thomson Reuters to complete the migration project five months ahead of schedule. By leveraging AWS Managed Services, Thomson Reuters moved hundreds of mission-critical legacy applications from across its global business units to the cloud and into production quickly. To further streamline the migration, Thomson Reuters used the AWS Marketplace's simplified software contracting services to rapidly procure and integrate its preferred third-party software into its AWS environment.

Additionally, Thomson Reuters has developed an internal platform to apply machine learning at scale using Amazon SageMaker – AWS's service for building, training and deploying machine learning models in the cloud and at the edge – to help developers and data scientists quickly gain new insights from real-time and historical data in a fully managed and secure environment. The platform saves developers and data scientists countless hours of coding by providing all of the components used for machine learning in a single toolset, so models get to production faster, with less effort and at lower cost, enabling Thomson Reuters to deliver new, intelligent solutions for its customers. Thomson Reuters uses Amazon SageMaker to automatically shut down GPU instances when a training job is done, and leverages Amazon Elastic Compute Cloud (Amazon EC2) Spot Instances – unused EC2 capacity available at up to a 90% discount compared to On-Demand prices – to reduce the cost of machine learning model inference (a brief sketch of these SageMaker cost controls follows this article).

"Thomson Reuters provides the intelligence, technology and expertise that our customers need to solve their toughest regulatory, legal and compliance challenges," says Justin Wright, Vice President of Architecture and Development, Thomson Reuters. "We're leveraging AWS's comprehensive set of cloud services to develop insightful new products and services that will help our customers reinvent the way they work and operate effectively in complex arenas. AWS is a trusted resource for us – especially AWS Managed Services and AWS Professional Services – providing the expertise to accelerate our move to the cloud and helping us migrate data centres ahead of schedule."

"Thomson Reuters is using the cloud to put accurate data and information into the hands of professionals across the legal, tax and accounting, and news industries to make important and timely decisions," comments Greg Pearson, Vice President, Worldwide Commercial Sales at Amazon Web Services, Inc. "By leveraging AWS's depth and breadth of services with the expertise of AWS Managed Services, Thomson Reuters is able to eliminate the heavy lifting of managing infrastructure operations and focus on innovating new ways to deliver in-depth information and digital solutions that bring new and timely insights to customers around the world."
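The article does not detail Thomson Reuters' internal platform, but the SageMaker cost behaviours it mentions map onto public AWS APIs. The sketch below uses the sagemaker Python SDK, with placeholder image, role and S3 values, to run a training job on managed Spot capacity; SageMaker provisions the GPU instances for the job and releases them automatically when training completes.

```python
# Minimal sketch of the SageMaker cost controls mentioned above: per-job
# GPU provisioning (instances are torn down when the job ends) and managed
# Spot capacity. Image URI, role ARN and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="<training-image-uri>",    # placeholder training container
    role="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",       # GPU instance, billed per job
    use_spot_instances=True,             # run on managed Spot capacity
    max_run=3600,                        # cap on training seconds
    max_wait=7200,                       # total wait, including Spot delays
    checkpoint_s3_uri="s3://example-bucket/checkpoints/",  # resume on interruption
    sagemaker_session=sagemaker.Session(),
)

# fit() launches the job; SageMaker shuts the instances down as soon as
# training finishes, so no GPU capacity idles between jobs.
estimator.fit({"train": "s3://example-bucket/train/"})
```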

Tech firm strengthens asset recovery links
A leading Internet of Things (IoT) solutions company has appointed a new head of global risk as part of its expanding security operations. The recruitment bolsters the firm's growing reputation in asset recovery, which has amounted to a value of over £5m across the past four years.

Seeking to enhance business protection following a rapid increase in high-value asset attacks across the UK, Smarter Technologies has enlisted Mark Roche into its ranks. A highly trained specialist with over 20 years of experience in covert operations, Roche is considered one of an elite group trained to manage high-value asset recovery operations.

"In the cat-and-mouse game of fighting criminal activity, it's our goal to always be one step ahead of the criminals," comments Roche on his appointment. "We are building and strengthening an expert team to tackle asset recoveries across a global network."

Smarter Technologies already works with recognised institutions such as the Ministry of Defence and the Royal Air Force, delivering real-time asset tracking through its secure Orion Data Network. With Roche among its ranks, the business is set to excel in helping SMEs, blue-chip companies and police forces across the country.

"The issue is that police forces sometimes don't have the resources to deal with asset theft," adds Roche. "When it comes to their resource allocation and prioritisation between the theft of an asset and a case that involves the potential preservation of life, the latter will always come first. My process involves liaising with the police force wherever the asset was stolen, working with the troops on the ground and locating the tracker to a precise location."

One police force already familiar with the benefits Roche can bring is Essex and Kent Police. "I have worked with Mark Roche for a number of years, and he has had a significant impact in reducing crime and improving recovery rates," says Jason Hendy, head of serious and organised crime, Essex and Kent Police. "Mark has helped catch more criminals with his equipment than any other company we know. He is also now known as an expert tracker across many police forces, and trains officers on tracking. We look forward to continuing our association to ensure we maintain our position at the forefront of detection and the successful prosecution of criminal behaviour."

'Not all micro data centres are created equal' – Zella DC
Zella DC has recently beaten several global data centre heavyweights to secure a multi-country deal with a leading US pharmaceutical group, proving that even the well-established IT infrastructure market is now ripe for disruption.

Recognising that fixed server room infrastructure is costly, inefficient and difficult to scale, Zella DC founders Angie and Clinton Keeler have been on a 10-year journey to deliver the next generation of micro data centres. This journey is now bearing fruit, with Zella DC securing a number of global enterprise accounts that will see its micro data centres deployed on six continents across a wide range of industries. The recent win in the US adds to a growing list of global organisations – including BT, Chevron, BHP and Austal – that see Zella DC data centres as critical to their future data management strategies.

Angie Keeler, Zella DC's Co-Founder and CEO, says: "With the increasing popularity of hyper-converged IT infrastructure, the majority of our customers are looking for data centre solutions that are fast, secure, scalable and cost-effective, and Zella DC ticks all of those boxes."

Providing significant energy efficiencies and rapid scalability options, the Australian-made Zella DC micro data centres are well positioned within the edge computing industry, a global market forecast to reach almost US$16B by 2025.

Clinton Keeler, Zella DC's Co-Founder and CTO, comments: "The Zella DC modular micro data centres give our customers the flexibility to supplement or completely replace their existing fixed server room infrastructure, and be up and running within days instead of months."

Angie Keeler concludes: "The growing adoption of the Zella DC technology, especially by major global players, has really validated that we are moving in the right direction, and we fully expect our growth to accelerate as more companies see the benefits of the Zella DC solutions."


