
Top 6 Characteristics to Look for in a Data Foundation for AI

At their core, artificial intelligence and machine learning initiatives are only as good as their underlying data. It is no surprise, then, that data integration is cited as the No. 1 challenge in the data pipeline for AI, with data preparation not far behind, according to research from Enterprise Strategy Group.

But as vital as data collection and preparation are in AI, they are just two of the basic building blocks to successful real-world AI. The underlying infrastructure platform must do much more than integrate and prepare data, especially when you consider the massive amounts of data required for AI to be truly meaningful and useful.

For IT and business decision-makers—as well as data scientists, data architects and others involved in building AI solutions—it is important to address vital issues around speed, performance, capacity, integration, simplicity, scalability and infrastructure optimization.

In this article, we look at six of the key features and capabilities required to achieve real-world AI success.

Factor No. 1: All data, all the time. AI is all about the data, and for AI implementations to be successful, the underlying infrastructure must be capable of handling massive amounts of data and massively large data sets, of both structured and unstructured varieties. This means there is no cold data in the AI world. Furthermore, a data platform must be built specifically around file- and object-based storage, with the ability to manage every element of the pipeline, from data ingestion and cleansing to modeled results.

Factor No. 2: Overall performance. The AI infrastructure must be able to ingest and act upon those massive amounts of data at tremendous speed to keep the GPUs or neural network ASICs busy. That means minimal latency and no compromises on IOPS or bandwidth. In real-world AI, data has to move at the speed of business, and sometimes faster, so that the insights derived from it can automatically propel the business forward.
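To make "keep the GPUs busy" concrete, a back-of-envelope calculation can estimate the sustained read bandwidth an image-training workload demands. All figures below (GPU count, per-GPU throughput, image size) are illustrative assumptions, not measured benchmarks:

```python
# Illustrative estimate of storage bandwidth needed to keep GPUs fed
# during image-based training. All inputs are assumptions.

gpus = 8                       # assumed GPUs in one training node
images_per_sec_per_gpu = 2000  # assumed per-GPU training throughput
avg_image_bytes = 150 * 1024   # assumed ~150 KB per image

required_bytes_per_sec = gpus * images_per_sec_per_gpu * avg_image_bytes
required_gb_per_sec = required_bytes_per_sec / 1e9

print(f"Sustained read bandwidth needed: ~{required_gb_per_sec:.1f} GB/s")
```

Even this modest single-node scenario demands several gigabytes per second of sustained reads; scale-out training across many nodes multiplies the figure accordingly, which is why bandwidth and IOPS cannot be compromised.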

Factor No. 3: Metadata performance. Any system supporting AI and deep learning must also be able to handle the large volume of metadata these environments create. That metadata load stems mostly from the billions (potentially trillions) of files stored, not from the sophistication of the information within them. If you rely on a separate metadata controller, you will eventually hit a scale limit that forces AI workloads to be split, and splitting workloads raises costs, creates inefficiencies and complicates operations.
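A quick piece of arithmetic shows why file count alone drives the metadata problem. The figures here (one trillion files, a few hundred bytes of attributes per file) are assumed for illustration:

```python
# Illustrative arithmetic: metadata footprint from file count alone.
# Both inputs are assumptions, not figures from any specific system.

num_files = 10**12             # assumed: one trillion files
metadata_bytes_per_file = 512  # assumed: inode/attribute data per file

total_metadata_bytes = num_files * metadata_bytes_per_file
total_metadata_pb = total_metadata_bytes / 10**15

print(f"Metadata footprint: ~{total_metadata_pb:.2f} PB "
      "before counting any file content")
```

Roughly half a petabyte of metadata under these assumptions, and every lookup, permission check and directory listing has to traverse it, which is why a single bottlenecked metadata controller eventually forces workloads apart.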

Factor No. 4: Don’t forget the code. In an AI environment, you need to manage your code repository as data, not just the videos, images and logs. Building the right data foundation therefore means bringing source code and build artifacts from developer platforms and systems such as GitHub, Jenkins and JFrog onto the shared storage platform, along with data science tools such as Jupyter. When developers are collaborating, building and sharing applications, they have to be able to share their code easily.

Factor No. 5: Linear scalability. One of the biggest challenges for AI deployments is moving from the pilot or prototype stage into a production environment. Your infrastructure platform should feature elastic scalability, with the ability to test and build AI training workloads that can easily and nondisruptively scale into production environments. With the right solution, AI teams should be able to grow their environments as needed, relying on scalable networking performance that is simple to manage.

Factor No. 6: Deployment and user simplicity. AI is complicated enough, and there is already a shortage of qualified experts in every phase, from data scientists to IT personnel experienced in building AI infrastructure platforms. At this stage of the market’s development, there is little advantage to building your own architecture when you can leverage a preintegrated solution such as FlashStack for AI. It delivers an end-to-end AI pipeline built on state-of-the-art components: Cisco UCS C480 ML servers, the Pure Storage FlashBlade storage platform and Cisco Nexus switches, which provide the massive bandwidth AI applications require.

Accelerate the AI Data Pipeline - FlashStack™ for AI

Discover how FlashStack™ for AI can power the data pipeline at every stage and reduce deployment risks while remaining flexible enough to scale out to meet increased workloads.

Download Now

Your success in adding AI capabilities to existing workloads and applications and developing new applications will depend on which technologies you use to manage your data pipeline.

Legacy infrastructure was not designed to meet the unique challenges of AI: managing modern data and high-performance GPUs while delivering the necessary performance, capacity and scalability.

Fortunately, there are solutions designed specifically to address these challenges without forcing you to build brand new infrastructure from scratch. With the right platform, your organization can accelerate its use of AI and move more easily from prototype to production.
