Format
-
1. Intro
- Team – experience
- Ecosystem
- Statistics - interest in Rust & AI
- Motivation for edge AI
- Workshop overview
-
2. Lecture: Deep Learning for Computer Vision
-
Introduce WasmEdge / Rust for edge devices
- High level overview of WebAssembly and the WebAssembly Component Model
- What is WasmEdge?
- An example of a WasmEdge Rust application
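A minimal sketch of what such an application can look like: a plain Rust program compiled to a WASI target and executed by WasmEdge (the target name and commands are assumptions and may vary with toolchain versions).

```rust
// Build (assuming the wasm32-wasip1 target is installed; older toolchains use wasm32-wasi):
//   cargo build --target wasm32-wasip1 --release
// Run with WasmEdge:
//   wasmedge target/wasm32-wasip1/release/hello.wasm Alice
use std::env;

fn main() {
    // WasmEdge passes command-line arguments through WASI.
    let name = env::args().nth(1).unwrap_or_else(|| "edge device".to_string());
    println!("Hello, {name}! This Rust binary is running inside WasmEdge.");
}
```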
-
Lecture on Computer Vision
- High-level introduction to computer vision tasks / problems: classification, object detection, segmentation
- What exactly is a neural network? High-level introduction to CNNs.
- What exactly is an (image) embedding?
- Common libraries in the ecosystem (from Python: PyTorch, 🤗HuggingFace, etc.)
-
Introduce Rust libraries for AI
- Compare the use of Rust for deep learning with Python, showcasing equivalent libraries
- Candle - a lightweight, minimalist ML framework for Rust that can load 🤗HuggingFace models (see the sketch after this list)
- mediapipe-rs - for computer vision on edge devices
- WasmEdge - running LlamaEdge for deploying LLMs
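A minimal sketch of Candle usage (the exact API may differ slightly between versions): two small matrices are created on the CPU and multiplied, the basic building block of a neural-network layer.

```rust
use candle_core::{Device, Tensor};

fn main() -> candle_core::Result<()> {
    let device = Device::Cpu;
    // Two 2x2 matrices, built from nested Rust arrays.
    let a = Tensor::new(&[[1f32, 2.], [3., 4.]], &device)?;
    let b = Tensor::new(&[[0.5f32, 0.], [0., 0.5]], &device)?;
    // Matrix multiplication, the core operation behind linear layers.
    let c = a.matmul(&b)?;
    println!("{c}");
    Ok(())
}
```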
-
HuggingFace (presented by HuggingFace)
- Platform
- resources, search, selection
- Deployment
- Rust & HuggingFace - Candle
-
3. Hands-on: Air-gapped face recognition on Pi
Use pretrained models from mediapipe-rs and HuggingFace🤗 to build a simple Face Authentication pipeline
- Stream video input from a webcam to the Raspberry Pi server
- Deploy a Face Detection model on the Raspberry Pi
- Deploy a Face Embedding model on the Raspberry Pi
- Save identities to a vector database such as Qdrant (via its Rust SDK)
- Perform privacy-preserving on-device authentication
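A hedged sketch of the final authentication step: the query embedding produced by the detection + embedding models above is compared against stored identities with cosine similarity. The function names and the threshold are illustrative, not part of any library API.

```rust
/// Cosine similarity between two embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b + 1e-8)
}

/// Returns the best-matching identity, if any exceeds the (illustrative) threshold.
fn authenticate(query: &[f32], known: &[(String, Vec<f32>)], threshold: f32) -> Option<String> {
    known
        .iter()
        .map(|(name, emb)| (name, cosine_similarity(query, emb)))
        .filter(|(_, score)| *score >= threshold)
        .max_by(|a, b| a.1.total_cmp(&b.1))
        .map(|(name, _)| name.clone())
}

fn main() {
    // Toy 3-dimensional embeddings; real face embeddings have hundreds of dimensions.
    let known = vec![("alice".to_string(), vec![0.9, 0.1, 0.0])];
    let query = [0.85, 0.15, 0.05];
    println!("{:?}", authenticate(&query, &known, 0.8));
}
```

In the workshop the stored embeddings would come from Qdrant rather than an in-memory slice, but the matching logic is the same.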
-
4. Lecture: Deep Learning for NLP
- Introduction to Deep Learning for NLP
- Overview of the history of NLP methods
- Development of Transformers
- How does an LLM work? Tokenizers, pre-training, post-training.
- High-level introduction to concepts about prompt engineering: Chain-of-Thought, RAG, In-Context Learning
- What exactly is a (text) embedding?
-
Introduce Rust libraries for NLP – Rust is actively used to build the ecosystem around training LLMs (e.g., tokenizers)
- tokenizers - the fast Rust implementation behind the tokenizers library used ubiquitously in the LLM ecosystem (see the sketch after this list)
- llama.cpp
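A minimal sketch using the tokenizers crate (assumes the crate's `http` feature so the tokenizer can be fetched from the Hub; the model name is just an example).

```rust
use tokenizers::Tokenizer;

fn main() -> tokenizers::Result<()> {
    // Download and load a pretrained tokenizer definition.
    let tokenizer = Tokenizer::from_pretrained("bert-base-uncased", None)?;
    // Encode a sentence; `true` adds special tokens such as [CLS] and [SEP].
    let encoding = tokenizer.encode("Rust on the edge", true)?;
    println!("tokens: {:?}", encoding.get_tokens());
    println!("ids:    {:?}", encoding.get_ids());
    Ok(())
}
```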
-
5. Hands-on: Chat with an LLM on Pi
- Deploy a small, pretrained LLM on a Raspberry Pi
- Chat with the LLM
- Stream responses from the server (async)
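A hedged sketch of the chat step, assuming the LLM is served through LlamaEdge's OpenAI-compatible HTTP API on the Pi; the URL, port, and model name are assumptions, and streaming is omitted here for brevity.

```rust
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();
    let body = json!({
        "model": "llama-3.2-1b",   // example model name
        "messages": [{ "role": "user", "content": "Hello from a Raspberry Pi!" }],
        "stream": false
    });
    let resp: serde_json::Value = client
        .post("http://localhost:8080/v1/chat/completions") // assumed LlamaEdge endpoint
        .json(&body)
        .send()
        .await?
        .json()
        .await?;
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}
```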
-
Build a simple RAG pipeline
- Embed some (given) texts using a text embedding model (e.g., BERT)
- Store texts & embeddings into QDrant vector store
- The LLM retrieves relevant passages from the vector store to answer questions (see the prompt-assembly sketch after this list)
- Agentic RAG – Before retrieving passages, use the LLM to summarize passages/select the most important bits
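A small sketch of the prompt-assembly step referenced above: the passages retrieved from the vector store are prepended as context before the user's question, which is what turns plain chat into RAG.

```rust
/// Build a RAG prompt from the passages retrieved from the vector store.
fn build_rag_prompt(question: &str, retrieved: &[String]) -> String {
    let mut prompt = String::from("Answer the question using only the context below.\n\nContext:\n");
    for (i, passage) in retrieved.iter().enumerate() {
        prompt.push_str(&format!("[{}] {}\n", i + 1, passage));
    }
    prompt.push_str(&format!("\nQuestion: {question}\nAnswer:"));
    prompt
}

fn main() {
    let passages = vec!["The workshop is hosted in Paris.".to_string()];
    println!("{}", build_rag_prompt("Where is the workshop?", &passages));
}
```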
-
6. Hands-on: Integration
Integrate both computer vision models and LLMs into a single application.
- Develop an application that can register a user using Face Authentication
- Adapt the deployed LLM to handle tool calling using system prompts
- Users can input a free-text description of their preferences, which the LLM parses into a predefined set (see the sketch after this list)
- Finally, users are identified by their faces and can issue commands to the LLM assistant, which responds based on their declared preferences.
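A hedged sketch of the preference-parsing step referenced above: a system prompt instructs the LLM to answer only with JSON matching a predefined schema, which the application then deserializes. The field names are examples for this scenario, not a fixed spec.

```rust
use serde::Deserialize;

// Example schema for the "predefined set" of preferences.
#[derive(Debug, Deserialize)]
struct Preferences {
    language: String,       // e.g., "en" or "fr"
    units: String,          // e.g., "metric" or "imperial"
    interests: Vec<String>, // free-form topics the user cares about
}

// System prompt that constrains the LLM's output to the schema above.
const SYSTEM_PROMPT: &str = r#"You are a preference parser. Given a user's free-text description, respond ONLY with JSON of the form {"language": string, "units": string, "interests": [string]}."#;

/// Deserialize the LLM's JSON reply into the predefined structure.
fn parse_preferences(llm_output: &str) -> serde_json::Result<Preferences> {
    serde_json::from_str(llm_output)
}

fn main() {
    // Example LLM reply (would normally come from the chat endpoint).
    let reply = r#"{"language": "en", "units": "metric", "interests": ["robotics", "rust"]}"#;
    println!("{:?}", parse_preferences(reply));
    println!("system prompt: {SYSTEM_PROMPT}");
}
```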
Sponsored by: Wyliodrin
Workshop venue: Fiap Paris
