With a data center in Los Angeles as well as in other major metropolitan centers, Telehouse must stay at the forefront of the ways in which AI and deep learning neural networks are shaping data center needs today and in the future. As AI becomes more prevalent in the data center, the number of enterprise and hyperscale data centers that apply AI and deep neural networks (DNNs) to massive amounts of data is growing.
Leveraging neural networks is increasingly seen as a fundamental part of digital transformation. Its growing prevalence can be seen in a recent InformationWeek article explaining how it is being applied in marketing, retail, finance, and operations management across almost every sector.
Because neural networks consume vast amounts of data, they require servers capable of performing enormous volumes of computation in record time. Consequently, GPUs designed to deliver this level of computational speed and throughput are quickly being developed and adopted by data centers around the world.
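To make this concrete, the sketch below, assuming PyTorch and a CUDA-capable GPU (the model, layer sizes, and data are illustrative placeholders, not tied to any vendor mentioned here), shows how a neural-network training step is offloaded to a GPU and falls back to the CPU when none is available.

# Minimal sketch (assumptions: PyTorch installed, CUDA-capable GPU present).
# Shows a single neural-network training step offloaded to the GPU.
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small illustrative feed-forward network (layer sizes are placeholders).
model = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
).to(device)  # move the model's parameters onto the chosen device

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for real training data, created on the device.
inputs = torch.randn(64, 1024, device=device)
targets = torch.randint(0, 10, (64,), device=device)

# One training step: forward pass, loss, backward pass, parameter update.
optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")

Placing both the model and the data on the same device keeps the training loop free of host-to-device copies, which is where GPU-based servers earn their throughput advantage.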
Data centers that support these new high-performance GPU-based servers can deliver greater efficiency and performance for advanced workloads while shrinking both the data center footprint and power consumption. For example, Nvidia's DGX-2, a single server capable of two petaflops of computing power, does work that currently takes hundreds of servers networked into clusters. The leading GPU developer's system is aimed primarily at deep learning applications.
While hyperscale data centers have been the traditional users of neural network-focused GPUs, colocation providers are increasingly partnering with major cloud providers that make this capability part of their offering. They can then offer it in their data centers to clients whose developers need cloud infrastructure services for building AI features into their own applications. This model is especially common among companies that require high-performance computing (HPC) for big data.
The use of cloud hardware designed for neural-network training and inference continues to accelerate in the data center, with Microsoft, for example, using FPGAs to speed up these workloads.
Contact Details:
Telehouse America
7 Teleport Drive,
Staten Island,
New York, USA 10311
Phone No: 718-355-2500
Email: gregory.grant@telehouse.com