Managing Edge Deployment of Large Deep Learning Models in Industry
An increasing number of companies see the potential of AI at the 'edge' of the network, i.e., performing local data processing to enable real-time decisions without relying on the cloud. At the same time, modern deep-learning models are growing larger at a rapid pace, making it challenging to deploy them on edge devices with limited computing power and memory.
On top of that, companies often struggle with:
View the presentation from the first consortium meeting on March 18, 2025, to learn more about our approach and solutions.
View Kickoff Presentation

Within the MEDLI project (Managing Edge Deployment of Large Deep Learning Models in Industry), we tackle these challenges and provide practical, industry-ready solutions. We consolidate state-of-the-art knowledge into a user-friendly approach that allows companies to: