White Paper
Leveraging OpenVINO™ Toolkit for AI Inference in Medical and Industrial Imaging to Overcome Size and Cost Challenges
Tokyo Electron Device (TED) leveraged Intel’s OpenVINO™ toolkit to accelerate AI inference on Intel® CPUs and iGPUs, eliminating the need for external GPU cards in industrial imaging systems. Faced with space, cost, and long-term availability challenges, TED’s client achieved over 10x faster inference using OpenVINO™ with Intel® Core™ processors. FAST, a TED Group member, similarly enhanced AI-based visual inspection without GPUs, doubling inference speed by upgrading to 12th Gen Intel® CPUs. These solutions show that high-performance AI can run efficiently on CPUs, expanding cost-effective edge AI applications in manufacturing and medical imaging.
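
To illustrate the approach summarized above, the sketch below shows a minimal OpenVINO™ inference pipeline in Python that targets the CPU, or the integrated GPU, rather than a discrete accelerator card. The model file name, input shape, and device selection are illustrative assumptions for this sketch, not details taken from TED's or FAST's deployments.

    # Minimal OpenVINO inference sketch (model path and input shape are assumed).
    import numpy as np
    import openvino as ov

    core = ov.Core()
    # On an Intel Core system with an iGPU this typically lists ['CPU', 'GPU'].
    print(core.available_devices)

    # Read a model previously converted to OpenVINO IR format (hypothetical file name).
    model = core.read_model("model.xml")

    # Compile for the CPU; pass "GPU" instead to target the integrated GPU.
    compiled = core.compile_model(model, device_name="CPU")

    # Run inference on a dummy input matching the model's assumed input shape.
    input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
    result = compiled([input_tensor])[compiled.output(0)]
    print(result.shape)

Switching between the CPU and the integrated GPU is a one-line change of the device name, which is what makes the GPU-card-free deployments described in this paper practical to evaluate and maintain.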