Move Fast and Make AI Good
Nearly half of large enterprises are actively deploying Gen AI to automate and optimize core tasks using their custom private data1. As AI delivers on its promise of greater efficiency and better business outcomes, the competition to build AI applications is heating up, and companies are making huge investments in the field. Now more than ever, it is critical that companies can develop their AI models as quickly and as accurately as possible. Of course, that’s easier said than done.
Workflows associated with data science and Gen AI, such as large language models (LLMs), are evolving rapidly. Many new software applications and workstreams rely heavily on powerful CPU and GPU components, especially as datasets grow in both size and complexity. Because the market is developing so quickly, AI professionals must do more with less time, money, and people, as resources are stretched thin to meet increasingly aggressive goals and project deadlines. Many teams simply have to move fast, make AI, and make AI good.
Just delivering AI capabilities is not enough. Models must be vetted, and vetted again, to mitigate unintended biases, to minimize, if not eradicate, hallucinations, and to test for efficacy and security. With so much sensitive data being used to create such powerful applications, organizations must aim for, and achieve, the highest ethical standards.
In creating powerful new AI applications, many have gravitated to the cloud. However, as organizations move from public to private foundation models, some are having to rapidly rethink this move, as using powerful CPU or GPU instances from popular cloud service providers can be cost prohibitive at scale, with a massive impact on workflow agility, time to insight, and budget efficiency. As LLM training and fine-tuning workloads increase, customers of all sizes are being forced to readdress how and where they use AI.
The Right Tool for the AI Job
Just like high-performance automobiles are crafted from the highest quality materials, AI models are fashioned from expertly calibrated data. AI data preparation, data cleansing, data visualization, model selection, model scoring, model training, and fine-tuning are all time-consuming tasks that can occupy up to 85% of data scientists’ workflow2. This is where workstations excel.
The workstation, by nature of its high-performance CPUs and GPUs, is designed to drive AI development, fine-tuning, and training at a smaller scale and cost than in the cloud. Keeping the data on-prem, close to the source, is not only more secure but also allows for closed loops and faster innovation cycles: data scientists training AI models gain the speed and flexibility to iterate, reducing time to insight.
The use of LLMs in the enterprise is exploding. LLMs require increasingly large amounts of GPU video memory and powerful NVIDIA GPUs, like the NVIDIA RTX™ 6000 Ada Generation, which features 48GB of VRAM to help train on large enterprise datasets. These powerful NVIDIA RTX professional GPUs are featured throughout Lenovo’s ThinkStation and ThinkPad portfolio and allow organizations to work on their own data in a secure, personal supercomputing sandbox.
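To see why VRAM capacity matters so much for LLM work, a common back-of-the-envelope estimate (an illustration, not an official sizing guide) multiplies the model's parameter count by the bytes each parameter occupies. A minimal sketch:

```python
def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Estimate the GPU memory (in GB) needed just to hold model weights.

    bytes_per_param: 2 for 16-bit (fp16/bf16) weights, 4 for fp32.
    This covers weights only; activations, gradients, and optimizer
    state during fine-tuning add substantially more on top.
    """
    return num_params * bytes_per_param / 1024**3

# A 7-billion-parameter model stored in 16-bit precision:
print(round(weight_memory_gb(7e9), 1))  # ~13.0 GB, well within a 48GB card
```

Full fine-tuning is far hungrier: with a typical mixed-precision setup using the Adam optimizer, gradients and optimizer state can push the per-parameter cost to several times the weights alone, which is why high-VRAM professional GPUs are so valuable for on-prem model development.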
OpenBCI, a pioneer in wearable brain-computer interfacing, used Lenovo workstations to help build AI-enabled tools for biosensing and neuroscience. During Lenovo Tech World 2023, Joseph Artuso, Chief Commercial Officer of OpenBCI, introduced Galea, the first device to integrate EEG, EMG, EDA, PPG, and eye-tracking into a single headset. By combining a multi-modal sensor system with the immersion of virtual reality, Galea gives researchers and developers a powerful new tool for understanding the human mind and body, as well as for creating solutions that respond to it. Galea, with all its sensors, generates oceans of data. The OpenBCI team used Lenovo ThinkStation workstations to collect, analyze, and process that data, moving its projects forward more quickly.
Ultimately, it’s about having the power to analyze your data in real time and to develop, train, and fine-tune your AI models as quickly and as efficiently as possible. Workstations excel in performance, agility, and cost efficiency, allowing AI innovators to fail fast and cheaply.
Properly planning AI projects is key, and allocating the right hardware, software, and skill sets to each project is critical. Making sure you are leveraging the best technology at each stage will ultimately clear a much faster path to success.
Enterprises need end-to-end solutions that bring together accelerated systems, AI software, and expert services to quickly build and run custom AI models using their own data. The new Lenovo AI Professional Services Practice enables enterprises to use a hybrid cloud approach.
In conjunction with the Lenovo AI Professional Services Practice, Lenovo offers a powerful and expanding portfolio of AI workstations. The ThinkStation PX features a thermally advanced, desktop- and rack-optimized chassis co-designed with Aston Martin. This workstation powerhouse runs the most complex computing workloads seamlessly, whether desk-side or in a data center. The ThinkStation PX is the only workstation on the market supporting two 60-core Intel Xeon Scalable processors, up to four NVIDIA RTX 6000 Ada Generation GPUs, and 4TB of system memory. The ultimate heterogeneous AI developer workstation, the PX is perfect for AI practitioners with the most demanding machine learning, deep learning, and data analytics workloads, and it can be deployed desk-side or accessed remotely.
The ThinkStation P7 and P5 are also well suited to demanding, agile AI workloads, supporting datasets up to twice the size of those handled by previous-generation data science workstations. And when mobility is required, Lenovo offers the ThinkPad P series, delivering the fastest mobile CPU and GPU capabilities with support for up to 192GB of system memory. These surprisingly powerful mobile workstations can handle compute-intensive needs anywhere, anytime.
Data scientists and AI developers need more than just quality hardware; they need flexibility in supporting, securing, and purchasing compute solutions. Lenovo workstations feature an extensive selection of independent software vendor (ISV) certifications and are fully qualified for a wide variety of enterprise and open-source operating systems, AI software tools, and frameworks, including Red Hat, Ubuntu, Fedora, Debian, and Rocky Linux, among many others. Lenovo workstations also support Lenovo’s ThinkShield™ security offerings, which provide comprehensive end-to-end protection from BIOS to cloud.
Managing these high-performance AI workstation investments is simplified through Lenovo TruScale™, which streamlines the procurement, deployment, and management of fully integrated AI-ready IT solutions, all delivered as a service with the simplicity of a scalable, pay-as-you-go model. It is a perfect fit for those migrating from the public cloud to private, on-prem compute.
AI is a journey, not a destination, and Lenovo is here to support customers along the way as their trusted technology partner. As the world’s #1 enterprise PC manufacturer3, the world’s #1 Top500 supercomputer manufacturer4, and a provider of the world’s most efficient high-performance compute solutions5, Lenovo can support customers’ AI needs no matter where or when they are executed.
1 TBR 1H23 Cloud Infrastructure & Platforms Customer Research
2 Anaconda – State of Data Science Report, 2022
3 IDC – Global PC Shipments Decline Again in the Third Quarter of 2023 Amid Signs of Market Improvement, According to IDC Tracker
4 Top 500 – Spoiler Alert: Lenovo is Still the #1 Global Provider of Supercomputers
5 Top 500 – Green 500, June 2023