ZTE enhances all-optical networks with smart ODN solutions
Hans Neff, Senior Director of ZTE's CTO Group, recently delivered a keynote speech entitled 'Embracing the AI Era to Accelerate the Development of All-Optical Networks' at the FTTH Conference 2025 in Amsterdam, sharing the company's solutions and experience in using AI to build intelligent ODN, improve the deployment efficiency of all-optical networks, and reduce deployment and O&M costs.

The deployment of all-optical networks is accelerating, and rapidly developing AI is increasingly integrated into broadband IT systems, significantly raising the intelligence level of FTTx deployment, operation and maintenance. In Europe, national broadband plans (NBPs) and EU policies are actively advocating for an all-optical, gigabit and digitally intelligent society, and demand for sustainable, efficient and future-oriented network infrastructure is growing. Hans pointed out that operators face challenges such as high costs, complex network management and diverse deployment scenarios, and that AI and digital intelligence can provide end-to-end support for ODN construction.

Hans described how ZTE leverages AI technology to build intelligent ODNs across the entire process, enhancing efficiency and reducing operational complexity, and elaborated on the specific applications and advantages of intelligent ODN across three phases: before, during and after deployment.

Before deployment, one-click zero-touch site surveys, AI-based automatic planning and design, and paperless processes replace traditional manual methods, significantly reducing the time and human resources required. During deployment, real-time visualised and cost-effective solutions, along with tailored product combinations - such as pre-connectorised splice-free ODN, plug-and-play deployment and air-blowing solutions - ensure smooth construction while reducing both time and labour costs. After deployment and delivery, intelligent real-time ODN network detection, analysis and risk warnings minimise fault location time, improving monitoring accuracy and troubleshooting efficiency. Additionally, new solutions like passive ID branch detection enable 24/7 real-time optical link monitoring, dynamic perception of link changes and automatic updates, ensuring 100% accuracy of end-to-end ODN resources. This proactive approach optimises network performance while streamlining maintenance and resource management.

Hans emphasised that collaboration among multiple parties is key to accelerating the construction of intelligent ODN networks. Suppliers play a vital role in driving digital and intelligent platforms, developing innovative product portfolios and delivering localised solutions through strategic partnerships. By leveraging AI and digital intelligence, operators and stakeholders can effectively address challenges such as high costs, complex construction and maintenance, and regional adaptability.

ZTE says it is committed to providing comprehensive intelligent ODN solutions and building end-to-end all-optical networks. Moving forward, the company will continue collaborating with global partners to harness its advanced network and intelligent capabilities, aiming to help operators build and operate cutting-edge, efficient and intelligent broadband networks, unlocking new application scenarios and fostering a ubiquitous, green and efficient era of all-optical networks.

STT GDC India launches AI-ready campus in Kolkata, India
ST Telemedia Global Data Centres (India) (STT GDC India) is set to revolutionise the data centre landscape in Eastern India with the launch of its state-of-the-art AI-ready campus in New Town, Kolkata. Spanning 5.59 acres, the next-generation campus is engineered to support the growing demands of AI computing with high-density rack configurations, advanced cooling systems, and a scalable, modular design. It aligns with the country's larger economic goals of promoting digitally enabled growth and broadening access to sustainable digital infrastructure.

The facility has earned the prestigious TIA-942 Rated-3 Design certification, underscoring its commitment to world-class infrastructure and reliability. The campus provides a significant boost to digital infrastructure in the eastern part of the country, with scalable capacity of up to 25MW of overall IT load. It incorporates forward-thinking power architecture with an N+2C design for reliability and a radial N+N configuration for main power incomers, ensuring dedicated feeder availability, and it utilises type-tested compact substations and LV DGs, setting new standards in power reliability and efficiency (see the sizing sketch below).

Bimal Khandelwal, CEO of STT GDC India, says, "This expansion is a gateway to accelerating AI innovation in Eastern India. Our Kolkata campus is specifically designed to support the burgeoning AI ecosystem, from startups developing local language AI models to enterprises deploying large language models. The facility's high-performance computing capabilities and low-latency connectivity will empower organisations to build and deploy AI solutions that drive digital transformation across sectors."

The facility is built with concurrently maintainable infrastructure, ensuring zero single points of failure (SPOF). It boasts a modular design with flexibility for liquid cooling technologies, supporting the next generation of high-performance computing workloads. The Kolkata data centre prioritises sustainability with a low-PUE (Power Usage Effectiveness) cooling design, incorporating water conservation techniques through closed-loop cooling, rainwater harvesting and greywater reuse. The facility also employs low-GWP refrigerants to reduce its carbon footprint, reinforcing STT GDC India's commitment to environmental responsibility.

Having launched in March 2025, the Kolkata facility expands STT GDC India's nationwide footprint to 30 data centres across 10 cities, with a total IT load capacity of 400MW. Its strategic location in New Town's Silicon Valley positions it as a crucial hub for AI development, serving enterprises, hyperscale cloud service providers and government organisations. This investment aligns with India's growing focus on artificial intelligence and the increasing demand for AI-ready digital infrastructure. The facility will support diverse AI-driven initiatives, from natural language processing in regional languages to computer vision applications in manufacturing and healthcare, ensuring high reliability, energy efficiency and environmental sustainability.
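
On the power architecture mentioned above: the article does not expand the 'N+2C' notation, but plain N+2 sizing is straightforward to illustrate. In the sketch below, the 25MW IT load comes from the article, while the per-unit rating is a hypothetical figure for illustration only.

```python
import math

# N+2 redundancy: provision enough units to carry the load (N), plus two
# spares, so the facility survives one unit failure even while another is
# out for maintenance. The 25MW IT load is from the article; the 3MW unit
# rating is a hypothetical figure for illustration only.
it_load_mw = 25.0
unit_rating_mw = 3.0

n = math.ceil(it_load_mw / unit_rating_mw)   # units needed to carry the load
total_units = n + 2                          # N+2: two redundant units
print(f"N = {n}, so install {total_units} units for N+2 redundancy")
```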

Raxio lands $100m to expand sub-Saharan African data centres
Raxio Group has signed an agreement for $100 million in financing from the International Finance Corporation (IFC) to accelerate its expansion of data centres that power key technologies like AI, cloud computing and digital financial services - critical enablers of African economic growth and digital inclusion.

The debt funding from IFC will help Raxio double its deployment of high-quality colocation data centres within three years, addressing growing demand in underserved markets across the continent. The company is developing a sub-Saharan African regional data centre platform in countries including Ethiopia, Mozambique, the Democratic Republic of Congo, Côte d'Ivoire, Tanzania and Angola.

Raxio is committed to bridging Africa's digital divide by introducing Tier III-certified, carrier-neutral, and secure data services to markets that have been overlooked by other providers. With a focus on high-growth areas, the company is tapping into regions with significant economic potential to unlock new opportunities across the continent.

"Raxio's business model shows how digital infrastructure can empower businesses, governments and communities to thrive in the digital economy," says Sarvesh Suri, IFC Regional Industry Director, Infrastructure and Natural Resources in Africa. "This partnership between Raxio and IFC is set to strengthen Africa's digital ecosystem and catalyse further investments and regional integration, building a more inclusive and sustainable future."

"This funding from IFC is a powerful endorsement of Raxio's vision and operational excellence," says Robert Skjødt, CEO of Raxio Group. "It will allow us to bring critical infrastructure to the regions that need it most and attract further investment as we continue to grow. Together with our other partners, we're building the foundation for Africa's digital future and setting new benchmarks for sustainability."

Raxio's facilities are designed for 24/7 reliability, ensuring uninterrupted service even during maintenance or unforeseen disruptions. The company integrates renewable energy solutions to minimise its environmental footprint and uses innovative energy-efficient equipment to reduce electricity and water consumption for cooling in several of its countries of operation.

In the Democratic Republic of Congo, Raxio's Kinshasa facility is poised to meet growing demand for data services in one of Africa's largest and fastest-growing urban centres. In Côte d'Ivoire, Raxio is establishing a digital hub to serve Francophone West Africa, connecting regional markets and facilitating cross-border trade. These efforts are empowering local businesses and integrating them into the global digital economy.

Industry experts react to World Backup Day
Today, 31 March, marks this year's World Backup Day, and industry experts say that it once again offers a timely reminder of how vulnerable enterprise data can be.

Fred Lherault, Field CTO at Pure Storage, says that businesses cannot afford to think about backup just one day every year, and predicts that 2025 could be a record-setting year for ransomware attacks.

Commenting on the day, Fred says, "31 March marks World Backup Day, serving as an important reminder for businesses to reassess their data protection strategies in the wake of an ever-evolving and ever-growing threat landscape. However, cyber attackers aren't in need of a reminder, and are probing for vulnerabilities 24/7 in order to invade systems. Given the valuable and sensitive nature of data, whether it resides in the public sector, healthcare, financial services or any other industry, businesses can't afford to think about backup just one day per year.

"Malware is a leading cause of data loss, and ransomware, which locks down data with encryption rendering it useless, is among the most common forms of malware. In 2024, there were 5,414 reported global ransomware attacks, an 11% increase from 2023. Due to the sensitive nature of these kinds of breaches, it's safe to assume that the real number is much higher. It's therefore fair to suggest that 2025 could be a record-setting year for ransomware attacks. In light of these alarming figures, there is no place for an 'it won't happen to me' mindset. Businesses need to be proactive, not reactive, in their plans - not only for their own peace of mind, but also in the wake of new cyber resiliency regulations laid down by international governments.

"Unfortunately, while backup systems have provided an insurance policy against an attack in the past, hackers are now trying to breach these too. Once an attacker is inside an organisation's systems, they will attempt to find credentials to immobilise backups. This will make it more difficult, time-consuming and potentially expensive to restore."

Meanwhile, Dr. Thomas King, CTO of global Internet Exchange operator DE-CIX, offers his own remarks on the occasion. Thomas explains, "World Backup Day has traditionally carried a very simple yet powerful message for businesses: back up your data. A large part of this is 'data redundancy' - the idea that storing multiple copies of data in separate locations will offer greater resilience in the event of an outage or network security breach. Yet, as workloads have moved into the cloud, and AI and SaaS applications have become dominant vehicles for productivity, the concept of 'redundancy' has started to expand. Businesses not only need contingency plans for their data, but contingency plans for their connectivity. Relying on a single-lane, vendor-locked connectivity pathway is a bit like only backing your data up in one place - once that solution fails, it's game over.

"In 2025, roughly 85% of software used by the average business is SaaS-based, with a typical organisation using 112 apps in their day-to-day operations. These cloud-based applications are wholly dependent on connectivity to function, and even minor slow-downs caused by congestion or packet loss on the network can kill productivity. This is even more true of AI-driven workloads, where businesses depend on low-latency, high-performance connectivity to generate real-time or near real-time calculations.
"Over the years, we have been programmed to believe that faster connectivity equals better connectivity, but the reality is far more nuanced. IT decision-makers frequently chase faster connections to improve their SaaS or AI performance, but 82% severely underestimate the impact of packet loss and the general performance of their connectivity. This is what some refer to as the 'Application Performance Trap' - expecting a single, lightning-fast connection to solve all performance issues. But what happens if that connectivity pathway becomes congested, or worse, fails entirely?

"This is why 'redundant' connectivity is essential. The main principle of redundancy in this context is that there should always be at least two paths leading to a destination - if one fails, the other can be used. This can be achieved by using a carrier-neutral Internet Exchange (IX), which facilitates direct peer-to-peer connectivity between businesses and their cloud-based workloads, essentially bypassing the public Internet. While IXs in the US were traditionally vendor-locked to a single carrier or data centre, neutral IXs allow businesses to establish multiple connections with different providers - sometimes to serve a particular use case, but often in the interests of redundancy. Our research has shown that more than 80% of IXs in the US are now data centre and carrier neutral, presenting a perfect opportunity for businesses to not only back up their data, but also back up their connectivity this World Backup Day."
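
The 'at least two paths' principle Thomas describes can be illustrated in a few lines of code. The sketch below shows naive client-side failover between two connectivity paths; the endpoints are hypothetical, and in practice redundancy of this kind is handled at the routing layer (for example, BGP sessions across two IX connections) rather than in application code.

```python
# Minimal sketch of the "at least two paths" redundancy principle described
# above. Endpoints are hypothetical; real networks fail over at the routing
# layer rather than in application code.
import socket

PATHS = [
    ("peering.example-ix-primary.net", 443),    # preferred path
    ("peering.example-ix-secondary.net", 443),  # redundant path
]

def connect_with_failover(paths, timeout=2.0):
    """Try each path in order; return the first socket that connects."""
    for host, port in paths:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as exc:
            print(f"path via {host} failed ({exc}), trying next path")
    raise ConnectionError("all redundant paths failed")

# Usage: sock = connect_with_failover(PATHS)
```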

Keysight and Coherent to enhance data transfer rates
Keysight Technologies and Coherent Corp have collaborated on a 200G multimode technology demonstration that will be showcased for the first time at the OFC Conference in San Francisco, California. The 200G-per-lane vertical-cavity surface-emitting laser (VCSEL) technology provides higher data transfer rates and addresses industry demand for higher bandwidth in data centres. It will enable the industry to deliver AI/ML services while reducing the power consumption and capital expense of short-reach data interconnects.

AI/ML deployment is driving extreme growth in the amount of data transferred in data centres. The cost of optical interconnects is an ever-growing portion of data centre capex, while their power consumption is an ever-growing portion of opex. 200G multimode VCSELs revolutionise data transfer and network efficiency, offering the following benefits compared to single-mode transmission:

· Increased bandwidth: 200 Gbps per lane doubles the data throughput of the current highest-speed multimode interconnects.
· Power efficiency: Significantly lower power-per-bit relative to single-mode alternatives, driving down electrical power operational expense and helping large-scale data centres minimise their environmental impact.
· Cost efficiency: Multimode VCSELs are less costly to manufacture than single-mode technologies, providing lower capital outlay for short-reach data links.
· Compatible network architecture: AI pods and clusters require many high-speed, short-reach interconnects to share data amongst GPUs, aligning well with the strengths of 200G multimode VCSELs.

The 200G multimode demonstration consists of Keysight's new wideband multimode sampling oscilloscope technology, Keysight's M8199B 256 GSa/s Arbitrary Waveform Generator (AWG) and Coherent's 200G multimode VCSEL. The M8199B AWG drives a 106.25 GBaud PAM4 electrical signal into the Coherent VCSEL, and the PAM4 optical signal output from the VCSEL is measured on the wideband multimode scope, displaying the eye diagram. The demo showcases the feasibility and capability of Coherent's new VCSEL technology, as well as Keysight's ability to characterise and validate this technology.

Lee Xu, Executive Vice President and General Manager, Datacom Business Unit at Coherent, says, "Keysight has been a trusted partner and a leader in test instrumentation technology, providing advanced test solutions to us. We rely on Keysight products, such as the M8199B Arbitrary Waveform Generator, to validate our latest transceiver designs. We look forward to continuing our collaboration as we push the boundaries of optical communications with products based on 200G VCSELs, silicon photonics, and Electro-absorption Modulated Lasers (EML)."

Dr. Joachim Peerlings, Vice President and General Manager of Keysight's Network and Data Centre Solutions Group, adds, "We are pleased with the progress the industry is making in bringing 200G multimode technology to market. Our close collaboration with Coherent enabled another milestone in high-speed connectivity. The industry will benefit from a more efficient and cost-effective technology to address their business-critical AI/ML infrastructure deployment in the data centre."

Join Keysight experts at OFC, taking place 1-3 April in San Francisco, California, at Keysight's booth (stand 1301) for live demos of coherent optical innovations.
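
To make the arithmetic behind the demo's figures explicit: PAM4 signalling carries two bits per symbol, so the 106.25 GBaud drive signal corresponds to a 212.5 Gb/s line rate, with the margin over the nominal 200 Gb/s payload taken up by forward error correction. A quick worked check:

```python
import math

# PAM4 uses 4 amplitude levels, so each symbol carries log2(4) = 2 bits.
bits_per_symbol = math.log2(4)            # = 2.0
symbol_rate_gbaud = 106.25                # from the demo described above

line_rate_gbps = symbol_rate_gbaud * bits_per_symbol   # 212.5 Gb/s on the wire
payload_gbps = 200.0                      # nominal per-lane data rate
fec_overhead = line_rate_gbps / payload_gbps - 1       # 6.25% FEC overhead

print(f"line rate: {line_rate_gbps} Gb/s, FEC overhead: {fec_overhead:.2%}")
```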

Broadband Forum launches trio of new open broadband projects
An improved user experience, including reduced latency and a wider choice of in-home applications, will be delivered to broadband consumers as the Broadband Forum launches three new projects. The three new open broadband projects will provide open source software blueprints for application providers and Broadband Service Providers (BSPs) to follow. These will deliver a foundation for Artificial Intelligence (AI) and Machine Learning (ML) for network automation, additional tools for network latency and performance measurements, and on-demand connectivity for different applications.

"These new projects will play a key role in improving network performance measurement and monitoring and the end-user experience," says Broadband Forum Technical Chair, Lincoln Lavoie. "Open source software is a crucial component in providing the blueprint for BSPs to follow and we invite interested companies to get involved."

The new Open Broadband-CloudCO-Application Software Development Kit (OB-CAS), Open Broadband – Simple Two-Way Active Measurement Protocol (OB-STAMP), and Open Broadband – Subscriber Session Steering (OB-STEER) projects will bring together software developers and standards experts from the forum. The projects will deliver open source reference implementations, which are examples of how Broadband Forum specifications can be implemented. They act as a starting point for application developers to base their designs on. In turn, those applications are available on platforms for BSPs to select and offer to their customers, shortening the path from the development of a specification to the first deployment of the technology in the network.

"The development of open source software and open broadband standards is invaluable to the industry, laying the foundations for faster innovation through global collaboration," says Broadband Forum CEO, Craig Thomas. "The Broadband Forum places the end-user experience at the forefront of all of our projects and is playing a crucial role in overcoming network issues."

OB-CAS aims to simplify network monitoring and maintenance for BSPs, while also offering a wider selection of applications from various software vendors. Alongside this, network operations will be simplified and automated through existing Broadband Forum cloud standards that use AI and ML to improve the end-user experience.

OB-STAMP will build an easy-to-deploy component that simplifies network performance measurement between Customer Premises Equipment and the IP edge (a minimal sketch of the two-way measurement idea appears below). The project will allow BSPs to proactively monitor their subscribers' home networks to measure latency and, ultimately, avoid network failures. Four vendors have already signed up to join the efforts to reduce the cost and time associated with deploying infrastructure for measuring network latency.

Building on Broadband Forum's upcoming technical report WT-474, OB-STEER will create a reference implementation of the Subscriber Session Steering architecture to deliver flexible, on-demand connectivity and simplify network management. Interoperability of Subscriber Session Steering is of high importance, as it will be implemented in access network equipment and edge equipment from various vendors.
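
The measurement idea behind OB-STAMP is two-way active measurement: a sender emits test packets, a reflector sends them back, and the round trip is timed. The snippet below is a plain UDP round-trip probe for illustration only; it does not implement the actual STAMP packet format defined in RFC 8762, and the reflector address is hypothetical.

```python
# Minimal sketch of two-way active latency measurement, the idea behind
# STAMP (RFC 8762). NOT the STAMP wire format: a real Session-Sender and
# Session-Reflector exchange structured test packets carrying timestamps.
import socket
import time

def probe_rtt(host: str, port: int = 862, count: int = 5) -> float:
    """Send UDP probes to an echo reflector and average the round-trip time.
    Port 862 is STAMP's default reflector port per RFC 8762."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    rtts = []
    for seq in range(count):
        t_send = time.monotonic()
        sock.sendto(seq.to_bytes(4, "big"), (host, port))
        sock.recvfrom(64)                      # reflector echoes the probe
        rtts.append(time.monotonic() - t_send)
    return sum(rtts) / len(rtts)

# Usage (hypothetical reflector on the CPE-to-IP-edge path):
# print(f"mean RTT: {probe_rtt('192.0.2.10') * 1000:.2f} ms")
```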

Five considerations when budgeting for enterprise storage
By Eric Herzog, Chief Marketing Officer at Infinidat.

Enterprise storage is fundamental to maintaining a strong enterprise data infrastructure. While storage has evolved over the years, the basic characteristics remain the same: performance, reliability, cost-effectiveness, flexibility, capacity, cyber resilience, and usability. The rule of thumb in enterprise storage is to look for faster, cheaper, easier and bigger capacity, but in a smaller footprint. So, when you're reviewing which storage solutions to entrust your enterprise with, what are the factors to consider? What are the five key considerations that have risen to the top of enterprise storage buying decisions?

• Safeguard against cyber attacks, such as ransomware and malware, by increasing your enterprise's cyber resilience and cyber recovery with automated cyber protection.
• Look to improve the performance of your enterprise storage infrastructure by up to 2.5x (or more), while simultaneously consolidating storage to save costs.
• Evaluate the optimal balance between your enterprise's use of on-premises storage and the public cloud (e.g. Microsoft Azure or Amazon AWS).
• Extend cyber detection across your storage estate.
• Initiate a conversation about infrastructure consumption services that are platform-centric, automated, and optimised for hybrid, multi-cloud environments.

The leading edge of enterprise storage has already moved into the next generation of storage arrays for all-flash and hybrid configurations. With cybercrime expected to cost enterprises in excess of £7.3 trillion in 2024, according to Cybersecurity Ventures, the industry has also seen a rise in cybersecurity capabilities being built into primary and secondary storage. Seamless hybrid multi-cloud support is now readily available, and enterprises are taking advantage of Storage-as-a-Service (STaaS) offerings with confidence and peace of mind.

When you're buying enterprise storage for a refresh or for consolidation, it's best to seek out solutions that are built from the ground up with cyber resilience and cyber recovery technology intrinsic to your storage estate, optimised by a platform-native architecture for data services. In today's world of continuous cyber threats, enterprises are substantially extending cyber storage resilience and recovery, as well as real-world application performance, beyond traditional boundaries.

We have also seen our customers value scale-up architectures, such as 60%, 80% and 100% populated models of software-defined storage arrays. This can be particularly pertinent with all-flash arrays aimed at specific latency-sensitive applications and workloads. Having the option to utilise a lifecycle management controller upgrade programme is also appealing when buying a next-generation storage solution; thinking ahead, this option can extend the life of your data infrastructure.

In addition, adopting next-gen storage solutions that facilitate a GreenIT approach puts your enterprise in a position to both save money (better economics) and reduce your carbon emissions (better for the environment) by using less power, less rack space, and less cooling. I call this the "E2" approach to enterprise storage: better economics and a better environment together in one solution. It helps to have faster storage devices with massive bandwidth and blistering I/O speeds.
Storage is not just about storage arrays anymore

Traditionally, it was commonly understood that if you needed more enterprise data storage capacity, you bought more storage arrays and threw them into your data centre. No more thought needed for storage, right? All done with storage, right? Well, not exactly. Not only has this piecemeal approach caused small-array storage 'sprawl' and complexity that can be exasperating for any IT team, but it doesn't address the significant need to secure storage infrastructures or simplify IT operations.

Cyber storage resilience and recovery need to be a critical component of an enterprise's overall cybersecurity strategy. You need to be sure that you can safeguard your data infrastructure with cyber capabilities, such as cyber detection, automated cyber protection, and near-instantaneous cyber recovery. These capabilities are key to neutralising the effects of cyber attacks. They could mean the difference between paying a ransom for data that has been taken 'hostage' and not paying any ransom at all. When you can execute rapid cyber recovery of a known good copy of your data, you can effectively combat the cybercriminals and beat them at their own sinister game.

One of the latest advancements in cyber resilience that you cannot afford to ignore is automated cyber protection, which helps you reduce the threat window for cyber attacks. With a strong automated cyber protection solution, you can seamlessly integrate your enterprise storage with your Security Operations Centre (SOC), Security Information and Event Management (SIEM), and Security Orchestration, Automation, and Response (SOAR) cybersecurity applications, as well as simple syslog functions for less complex environments. A security-related incident or event triggers immediate automated immutable snapshots of data, providing the ability to protect both block and file datasets (a simple sketch of this flow appears below). This is an extremely reliable way to ensure cyber recovery.

Another dimension of modern enterprise storage is seamless configuration of hybrid multi-cloud storage. The debate about whether an enterprise should put everything into the public cloud is over. There are very good use cases for the public cloud, but there continue to be very good use cases for on-prem storage, creating a hybrid multi-cloud environment that brings the greatest business and technical value to the organisation. You can now harness a powerful on-prem storage solution in a cloud-like experience across the entire storage infrastructure, as if the storage array you love on-premises were sitting in the public cloud. Whether you choose Microsoft Azure, Amazon AWS, or both, you can extend the data services usually associated with on-prem storage to the cloud, including ease of use, automation, and cyber storage resilience.

Purchasing new enterprise storage solutions is a journey. Isn't it the best choice to get on the journey to the future of enterprise storage, cyber security, and hybrid multi-cloud? If you use these top five considerations as a guidepost, you will end up in an infinitely better place for storage that transforms and transcends conventional thinking about the data infrastructure.

Infinidat at DTX 2025

Eric Herzog is a guest speaker at DTX 2025 and will be discussing 'The New Frontier of Enterprise Storage: Cyber Resilience & AI' on the Advanced Cyber Strategies Stage. Join him for unique insights on 3 April 2025, from 11.15-11.40am. DTX 2025 takes place on 2-3 April at Manchester Central. Infinidat will be located at booth #C81.
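
The automated protection flow described above, where a security event triggers an immediate immutable snapshot, can be sketched as follows. The event shape and the snapshot call are hypothetical placeholders for illustration; they are not Infinidat's actual API.

```python
# Illustrative sketch of automated cyber protection: a security event from a
# SIEM/SOAR pipeline triggers an immediate immutable snapshot. The event
# format and take_immutable_snapshot() are hypothetical, not a vendor API.
import datetime

PROTECTED_DATASETS = ["block-vol-finance", "file-share-hr"]

def take_immutable_snapshot(dataset: str) -> str:
    """Placeholder for a storage-platform call that creates a snapshot
    which cannot be altered or deleted until its retention period expires."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return f"{dataset}@{stamp}-immutable"

def on_security_event(event: dict):
    # React only to incident-class events forwarded by the SIEM/SOAR layer,
    # shrinking the window between detection and a known good copy.
    if event.get("severity") in ("high", "critical"):
        for dataset in PROTECTED_DATASETS:
            snap = take_immutable_snapshot(dataset)
            print(f"event {event['id']}: created {snap}")

on_security_event({"id": "inc-0231", "severity": "critical"})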

Ataccama to deliver end-to-end visibility into data flows
Ataccama, the data trust company, has launched Ataccama Lineage, a new module within its Ataccama ONE unified data trust platform (V16). Ataccama Lineage provides enterprise-wide visibility into data flows, offering organisations a clear view of their data's journey from source to consumption. It helps teams trace data origins, resolve issues quickly, and ensure compliance - enhancing transparency and building confidence in data accuracy for business decision-making. Fully integrated with Ataccama's data quality, observability, governance, and master data management capabilities, Ataccama Lineage enables organisations to make faster, more informed decisions, such as ensuring audit readiness and meeting regulatory compliance requirements.

Data challenges are increasingly complex and, according to the Ataccama Data Trust Report 2025, 41% of Chief Data Officers are struggling with fragmented and inconsistent systems. Despite significant investments in integrations, AI, and cloud applications, enterprise data often remains siloed or poor in quality. This fractured landscape obscures visibility into data transformations and flows, creating inefficiencies and operational silos. Ataccama believes that this lack of clarity hampers collaboration and increases the risk of non-compliance with regulations like GDPR, erodes customer trust, drains resources, and slows decision-making - ultimately stifling organisational growth.

Ataccama Lineage is designed to simplify how organisations manage and trust their data. Its AI-powered capabilities automatically map data flows and transformations, saving time and reducing manual effort. For example, tracking customer financial data across fragmented systems is a common struggle in financial services. Ataccama Lineage provides clear, visual maps that trace issues like missing or duplicate records to their source. It also tracks sensitive data, such as PII, with audit-ready documentation to ensure compliance. By delivering reliable, trustworthy data, Ataccama Lineage establishes a strong foundation for AI and analytics, enabling organisations to make informed decisions and achieve long-term success.

Isaac Gabay, Senior Director, Data Management & Operations at Lennar, says, "As one of the nation's leading homebuilders, Lennar is continually evolving our data foundation with best-in-class, cost-effective solutions to drive efficiency and innovation. Ataccama ONE Lineage's detailed, visual map of data flows enables us to monitor data quality, trace issues through our ecosystem, and take a proactive approach to prevent and remediate quality concerns while maintaining centralised control. Ataccama ONE Lineage will provide unparalleled visibility, enhancing transparency, data literacy, and trust in our data. This partnership strengthens our ability to scale with confidence, deliver accurate insights, and adapt to the evolving needs of the homebuilding industry."

Jessie Smith, VP of Data Quality at Ataccama, comments, "Managing today's data pipelines means dealing with increasing sources, diverse data types, and transformations that impact systems upstream and downstream. The rise of AI and generative AI has amplified complexity while expanding data estates, and stricter audits demand greater transparency. Understanding how information flows across systems is no longer optional, it's essential.
"Ataccama Lineage is part of the Ataccama ONE data trust platform, which brings together data quality, lineage, observability and master data management into a unified solution for enterprise companies."

Key benefits of AI-powered Ataccama Lineage include:

- Faster resolution of data quality issues: Advanced anomaly detection identifies issues like missing records, unexpected values, or duplicates caused by transformation errors. For example, in retail operations with multiple sales channels, mismatched pricing or inventory discrepancies can disrupt business. Ataccama Lineage enables teams to quickly pinpoint root causes, assess downstream impacts, and resolve issues before they affect operations - ensuring continuity and reliability.

- Simplified compliance: Data classification and anomaly detection enhance visibility into sensitive data, such as PII, and track its transformations. Financial organisations benefit from audit-ready documentation that ensures PII is properly traced to authorised locations, reducing regulatory risks, meeting data privacy requirements, and fostering customer trust with transparent processes.

- Comprehensive visibility into data flows: Lineage maps provide a detailed, enterprise-wide view of data flows, from origin to dashboards and reports. Teams in sectors like manufacturing can analyse the lineage of key metrics, such as production efficiency or supply chain performance, identifying dependencies across ETL jobs, on-premises systems, and cloud platforms. Enhanced filtering focuses efforts on critical datasets, allowing faster issue resolution and better decision-making.

- Streamlined data modernisation efforts: During cloud migrations, Ataccama Lineage reduces risks by mapping redundant pipelines, dependencies, and critical datasets. Insurance companies transitioning to modern platforms can retire outdated systems and migrate only essential data, minimising disruption while maintaining compliance with regulations like Solvency II.
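
The tracing capability described above boils down to walking a directed graph of data flows upstream from a given dataset. The sketch below illustrates that idea in a few lines; the dataset names are hypothetical, and this is not a description of Ataccama's internal model.

```python
# Minimal sketch of the idea behind data lineage: model datasets as nodes in
# a directed graph and trace a report's upstream sources. Illustrative only.
from collections import defaultdict

upstream = defaultdict(list)   # edge: derived dataset -> its source datasets

def add_flow(source: str, target: str):
    upstream[target].append(source)

def trace_origins(dataset: str) -> set:
    """Walk the lineage graph upstream to find all root source systems."""
    roots, stack, seen = set(), [dataset], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if not upstream[node]:
            roots.add(node)          # no inputs: an origin system
        else:
            stack.extend(upstream[node])
    return roots

add_flow("crm.customers", "warehouse.dim_customer")
add_flow("erp.orders", "warehouse.fact_orders")
add_flow("warehouse.dim_customer", "dashboard.churn_report")
add_flow("warehouse.fact_orders", "dashboard.churn_report")
print(trace_origins("dashboard.churn_report"))  # {'crm.customers', 'erp.orders'}
```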

AirTrunk expands with second AI-ready data centre in Johor
Asia Pacific & Japan (APJ) hyperscale data centre specialist, AirTrunk, has announced plans to develop its second cloud and AI-ready data centre in Johor, Malaysia. AirTrunk JHB2 will be located in Iskandar Puteri, in the Johor region. Scalable to over 270MW, JHB2 will support demand from global public cloud and technology companies in the region.

The JHB2 announcement follows the opening of AirTrunk's first data centre in Johor, the 150+MW AirTrunk JHB1, in July 2024. Combined, AirTrunk is investing over RM 9.7 billion (A$3.5 billion) in Malaysia, providing more than 420MW of IT load. JHB2, strategically located in a major availability zone, provides an end-to-end cross-border connectivity strategy for customers and the ability to scale their operations to match demand. The additional capacity will support Malaysia's fast-growing digital economy and follows the establishment of the landmark Johor-Singapore Special Economic Zone (JS-SEZ).

Like JHB1, the new data centre will feature AirTrunk's state-of-the-art liquid cooling technology for managing the high-density demands of AI, ensuring significant energy savings. JHB2 is designed to meet the highest standards of efficiency and security, with a low design PUE (Power Usage Effectiveness) of 1.25 and multiple renewable energy options available to customers. To support the Johor State Government's aim to diversify water sources, AirTrunk is scoping treated greywater as a recycled, sustainable water supply for its campuses' operations.

Aligned with the Malaysian Government's focus on National Technical and Vocational Education and Training (TVET) and increasing opportunities for highly skilled workers, AirTrunk is creating jobs for Malaysians, with above-market-rate remuneration for AirTrunk employees, 90% local employees and career development opportunities. AirTrunk is also contributing to digital literacy programmes and funding STEM education scholarships at the Universiti Teknologi Malaysia (UTM) to further support the local community over the long term.

Advancing towards its net zero 2030 target, AirTrunk recently announced one of the largest onsite solar deployments for a data centre in Southeast Asia at JHB1, as well as the first renewable energy Virtual Power Purchase Agreement for a data centre, for 30MW of renewable energy, under Malaysia's Corporate Green Power Programme. AirTrunk is working with the leading Malaysian utility company, Tenaga Nasional Berhad (TNB), to connect JHB2 through TNB's Green Lane Pathway for Data Centres initiative, streamlining high-voltage electricity supply to an accelerated timeframe of 12 months. AirTrunk is also providing land for TNB to build a new substation, adding resilience to the electricity distribution system in the area. This continuing collaboration, which started with an MoU signed in 2023, opens the door for AirTrunk to explore green solutions with TNB in efforts to advance the energy transition in the region.

AirTrunk Founder & Chief Executive Officer, Robin Khuda, says, "As Malaysia establishes itself as a digital powerhouse, it is a privilege for AirTrunk to contribute to this growth over the long term and deliver shared benefit for the people of Malaysia. AirTrunk's data centres serve as essential infrastructure that will help boost productivity and enable new products and services that can drive economic growth.

"We are committed to helping realise the potential of cloud and AI in Malaysia and prioritising circularity for the benefit of society and the environment. AirTrunk is supporting local digital literacy and STEM initiatives, driving the energy transition and working to embed a sustainable water supply to make a positive impact."
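
For context on the PUE figure quoted for JHB2: PUE is defined as total facility power divided by IT equipment power, so a design target translates directly into an overhead budget. A quick worked example using the article's figures:

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# Worked example with JHB2's stated figures: design PUE of 1.25, scalable to
# over 270MW of IT load.
design_pue = 1.25
it_load_mw = 270.0

total_facility_mw = design_pue * it_load_mw      # 337.5 MW at full build-out
overhead_mw = total_facility_mw - it_load_mw     # 67.5 MW for cooling, power
                                                 # conversion and other overhead
print(f"{total_facility_mw} MW total, of which {overhead_mw} MW is overhead")
```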

Tellus delivers key component for collaborative data economy
It has been revealed that the Gaia-X development project, Tellus, has successfully completed its implementation phase. Led by the Internet Exchange operator, DE-CIX, the consortium has developed a prototype interconnection infrastructure that provides fully automatic and virtual access to networks for sensitive, real-time applications across distributed cloud environments. Tellus covers the entire supply chain of interconnection services and integrates offerings from various providers based on the decentralised and distributed data infrastructure of Gaia-X. This makes Tellus a key component for the comprehensive connectivity required by intelligent business models in a collaborative data economy.

Delivering networks and services according to application demands

In the past, implementing business-critical applications in distributed IT systems required purchasing all necessary components, services, and functions separately from different providers and manually combining them in a time-consuming and costly process - without end-to-end guarantees. Tellus' open-source software not only automates these processes but also ensures specific connectivity requirements are met.

During the final phase, the project team implemented a controller and a service registry, which function as the central elements of a super-node architecture. The controller coordinates and provisions service offers and orders via application programming interfaces (APIs). The service registry stores and lists all services that the controller can search through, address, and combine. The search process runs via the controller into the registry and its associated graph database, which then delivers suitable solutions. Finally, the controller commissions the interconnection infrastructure to provision network and cloud services that meet the requirements of the respective application, including guaranteed performance and Gaia-X compliance.

Deployable prototype: Reliable and dynamic connectivity for data exchange

In the implemented proof-of-concept (PoC) demo, virtual networks and services can be provided via a user-friendly interface to meet the requirements of industrial applications; for example, transmitting hand movements to a robot in real time via a smart glove. The same applies to delivering connectivity for a digital twin from IONOS in the manner required by production plants, to simulate, monitor in real time, and optimise manufacturing steps. It equally applies to TRUMPF's fully automatic laser-cutting tools, where reliable and dynamic networks keep systems available and pay-per-part business models productive.

Milestone for a secure, sovereign, and collaborative data economy

"Since Tellus registers the products of all participants in a standardised way and stores the network nodes in a structured manner in a graph database, interconnection services can be composed end-to-end via a weighted path search," says Christoph Dietzel, Head of Product & Research at DE-CIX. "With the successful completion of the implementation phase and the proof-of-concept demo, we have not only demonstrated the technical feasibility of our Gaia-X compliant interconnection infrastructure, but have also set an important milestone for the future of secure, sovereign, and collaborative data processing."
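
Christoph Dietzel's description of composing interconnection services "end-to-end via a weighted path search" maps onto a classic shortest-path computation over the registry's service graph. The sketch below illustrates the idea with Dijkstra's algorithm; the service names and edge weights are hypothetical, and Tellus' actual controller logic is not reproduced here.

```python
# Sketch of composing interconnection services via a weighted path search,
# as described above. Nodes stand for network/cloud services held in the
# registry's graph database; edge weights could encode latency or cost.
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra over a dict: node -> list of (neighbour, weight)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

# Hypothetical service graph; weights as latency in milliseconds.
services = {
    "factory-edge": [("ix-frankfurt", 4)],
    "ix-frankfurt": [("cloud-ionos", 2), ("cloud-other", 6)],
    "cloud-other": [("cloud-ionos", 3)],
}
print(cheapest_path(services, "factory-edge", "cloud-ionos"))
# -> (6, ['factory-edge', 'ix-frankfurt', 'cloud-ionos'])
```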


