mermaid.initialize({ startOnLoad: true });</script><metacontent="Aron Petau"property=og:site_name><metacontent="AIRASPI Build Log - Aron Petau"property=og:title><metacontent=https://aron.petau.net/project/airaspi-build-log/property=og:url><metacontent="Utilizing an edge TPU to build an edge device for image recognition and object detection"property=og:description><metacontent=https://aron.petau.net/card.pngproperty=og:image><metacontent=en_USproperty=og:locale><body><headerid=site-nav><nav><ahref=#main-contenttabindex=0> Skip to Main Content </a><ul><liid=home><ahref=https://aron.petau.net><iclass=icon></i>Aron Petau</a><liclass=divider><li><ahref=https://aron.petau.net/project/>Projects</a><li><ahref=https://aron.petau.net/pages/contact/>Contact</a><li><ahref=https://aron.petau.net/pages/cv/>CV</a><li><ahref=https://aron.petau.net/pages/about/>About</a><liid=search><buttonclass=circleid=search-toggletitle=Search><iclass=icon></i></button><liid=language-switcher><detailsclass=closable><summaryclass=circletitle=Language><iclass=icon></i></summary><ul><li><ahref=https://aron.petau.net//de/project/airaspi-build-log/lang=de>Deutsch</a></ul></details><liid=theme-switcher><detailsclass=closable><summaryclass=circletitle=Theme><iclass=icon></i></summary><ul><li><buttontitle="Switch to Light Theme"class=circleid=theme-light><iclass=icon></i></button><li><buttontitle="Switch to Dark Theme"class=circleid=theme-dark><iclass=icon></i></button><li><buttontitle="Use System Theme"class=circleid=theme-system><iclass=icon></i></button></ul></details><liid=feed><detailsclass=closable><summaryclass=circletitle=Feed><iclass=icon></i></summary><ul><li><ahref=https://aron.petau.net/rss.xml>RSS</a><li><ahref=https://aron.petau.net/atom.xml>Atom</a></ul></details><liid=repo><aclass=circlehref=https://forgejo.petau.net/aron/awebsitetitle=Repository><iclass=icon></i></a></ul></nav><divid=search-container><labelclass=visually-hiddenfor=search-bar>Search</label><inputplaceholder="Search 
for…"autocomplete=offdisabledid=search-bartype=search><divid=search-results-container><divid=search-results></div></div></div></header><mainid=main-content><article><divid=heading><p><small><timedatetime=" 2024-01-30T00:00:00+00:00">Published on January 30, 2024</time></small><h1>AIRASPI Build Log</h1><p><small><span>By Aron Petau</span><span> • </span><span>12 minutes read</span><span> • </span></small><ulclass=tags><li><aclass=taghref=https://aron.petau.net/tags/coral/>coral</a><li><aclass=taghref=https://aron.petau.net/tags/docker/>docker</a><li><aclass=taghref=https://aron.petau.net/tags/edge-tpu/>edge TPU</a><li><aclass=taghref=https://aron.petau.net/tags/edge-computing/>edge computing</a><li><aclass=taghref=https://aron.petau.net/tags/frigate/>frigate</a><li><aclass=taghref=https://aron.petau.net/tags/local-ai/>local AI</a><li><aclass=taghref=https://aron.petau.net/tags/private/>private</a><li><aclass=taghref=https://aron.petau.net/tags/raspberry-pi/>raspberry pi</a><li><aclass=taghref=https://aron.petau.net/tags/surveillance/>surveillance</a></ul></div><divid=buttons-container><atitle="Go to Top"href=#topid=go-to-top><iclass=icon></i></a><ahref="https://shareopenly.org/share/?url=https://aron.petau.net/project/airaspi-build-log/&text=Utilizing%20an%20edge%20TPU%20to%20build%20an%20edge%20device%20for%20image%20recognition%20and%20object%20detection"id=sharetitle=Share><iclass=icon></i></a><atitle="File an Issue"href=https://forgejo.petau.net/aron/awebsite/issuesid=issue><iclass=icon></i></a></div><h2id=AI-Raspi_Build_Log>AI-Raspi Build Log</h2><p>This document chronicles the process of building a custom edge computing device for real-time image recognition and object detection. The goal was to create a portable, self-contained system that could operate independently of cloud infrastructure.<p><strong>Project Goals:</strong><p>Build an edge device with image recognition and object detection capabilities that can process video in real-ti
</code></pre><h3id=Preparing_the_System_for_Coral_TPU>Preparing the System for Coral TPU</h3><p>The Raspberry Pi 5's PCIe interface requires specific configuration to work with the Coral Edge TPU. This section was the most technically challenging, involving kernel modifications and device tree changes. A huge thanks to Jeff Geerling for documenting this process—without his detailed troubleshooting, this would have been nearly impossible.<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh># check kernel version
uname -a
</code></pre><p>While in <code>/boot/firmware/config.txt</code>, add the following lines to force the 4&nbsp;KB page-size kernel and enable the external PCIe connector:<preclass=language-configdata-lang=config><codeclass=language-configdata-lang=config>kernel=kernel8.img
dtparam=pciex1
dtparam=pciex1_gen=2
</code></pre><p>Save and reboot:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>sudo reboot
</code></pre><preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh># check kernel version again
uname -a
</code></pre><ul><li>The kernel version should now be different, ending in <code>-v8</code></ul><p>Next, edit <code>/boot/firmware/cmdline.txt</code>:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>sudo nano /boot/firmware/cmdline.txt
</code></pre><ul><li>Add <code>pcie_aspm=off</code> before <code>rootwait</code> to disable PCIe Active State Power Management, then save and reboot:</ul><preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>sudo reboot
</code></pre><h3id=Modifying_the_Device_Tree>Modifying the Device Tree</h3><h4id=Initial_Script_Attempt_(Deprecated)>Initial Script Attempt (Deprecated)</h4><p>Initially, there was an automated script available that was supposed to handle the device tree modifications. However, this script proved problematic and caused issues during my build.<blockquoteclass=markdown-alert-warning><p>Maybe this script is the issue? I will try again without it.</blockquote><preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>curl https://gist.githubusercontent.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e/raw/32d21f73bd1ebb33854c2b059e94abe7767c3d7e/coral-ai-pcie-edge-tpu-raspberrypi-5-setup | sh
</code></pre><p>Yes, it was the problematic script. I left a comment documenting the issue on the original gist: <ahref="https://gist.github.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e?permalink_comment_id=4860232#gistcomment-4860232">My comment on the gist</a><h4id=Manual_Device_Tree_Modification_(Recommended)>Manual Device Tree Modification (Recommended)</h4><p>Instead of relying on the automated script, I followed Jeff Geerling's manual approach. This method gives you complete control over the process and helps you understand what's actually happening under the hood.<blockquoteclass=markdown-alert-note><p>In the meantime, the script has been updated and is now recommended again.</blockquote><p>The device tree modification process involves backing up the current device tree blob (DTB), decompiling it to a readable format, editing the MSI parent reference to fix PCIe compatibility issues, and then recompiling it back to binary format. Here's the step-by-step process:<p><strong>1. Back up and Decompile the Device Tree</strong><preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh># Back up the current dtb
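# A sketch of the manual procedure from Jeff Geerling's guide -- the file
# paths and node name are assumptions for a Pi 5; verify them on your
# system before overwriting anything.
sudo cp /boot/firmware/bcm2712-rpi-5-b.dtb ~/bcm2712-rpi-5-b.dtb.bak

# Decompile the blob into an editable source file (dtc warnings are normal)
dtc -I dtb -O dts /boot/firmware/bcm2712-rpi-5-b.dtb -o ~/rpi5.dts

# Edit ~/rpi5.dts: in the pcie@110000 node, point msi-parent at the mip
# MSI controller (see the note about &lt;0x2c&gt; below), then recompile
# over the original and reboot:
sudo dtc -I dts -O dtb ~/rpi5.dts -o /boot/firmware/bcm2712-rpi-5-b.dtb
sudo reboot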
</code></pre><blockquoteclass=markdown-alert-note><p>Note: <code>msi-parent</code> seems to carry the value <code>&lt;0x2c&gt;</code> nowadays; figuring that out cost me a few hours.</blockquote><p><strong>2. Verify the Changes</strong><p>After rebooting, check that the Coral TPU is recognized by the system:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>lspci -nn | grep 089a
</code></pre><p>You should see output similar to: <code>0000:01:00.0 System peripheral [0880]: Global Unichip Corp. Coral Edge TPU [1ac1:089a]</code><h3id=Installing_the_Apex_Driver>Installing the Apex Driver</h3><p>With the device tree properly configured, the next step is installing Google's Apex driver for the Coral Edge TPU. This driver enables communication between the operating system and the TPU hardware.<p>Following the official instructions from <ahref=https://coral.ai/docs/m2/get-started#2a-on-linux>coral.ai</a>:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
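# The remaining steps, assumed from the official coral.ai guide --
# double-check them against coral.ai/docs/m2/get-started before running.
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gasket-dkms libedgetpu1-std
# udev rule so non-root users in the apex group can access the TPU
sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
sudo groupadd apex
sudo adduser $USER apex
sudo reboot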
</code></pre><p>This sequence:<ol><li>Adds Google's package repository and GPG key<li>Installs the gasket DKMS module (kernel driver) and Edge TPU runtime library<li>Creates udev rules for device permissions<li>Creates an <code>apex</code> group and adds your user to it<li>Reboots to load the driver</ol><p>After the reboot, verify the installation:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>lspci -nn | grep 089a
</code></pre><p>This should display the connected Coral TPU as a PCIe device.<p>Next, confirm the device node exists with proper permissions:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>ls -l /dev/apex_0
</code></pre><p>If the output shows <code>/dev/apex_0</code> with appropriate group permissions, the installation was successful. If not, review the udev rules and group membership.<h3id=Testing_with_Example_Models>Testing with Example Models</h3><p>To verify the TPU is functioning correctly, we'll use Google's example classification script with a pre-trained MobileNet model:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh># Install Python packages
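# A sketch following the coral.ai get-started guide -- repo URL and file
# names are taken from that guide; adjust if the example repo has changed.
sudo apt-get install python3-pycoral
mkdir coral && cd coral
git clone https://github.com/google-coral/pycoral.git
cd pycoral
bash examples/install_requirements.sh classify_image.py
# Classify a test image on the Edge TPU (the first inference is slow
# while the model loads onto the TPU)
python3 examples/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--labels test_data/inat_bird_labels.txt \
--input test_data/parrot.jpg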
</code></pre><p>The output should show inference results with confidence scores, confirming the Edge TPU is working correctly.<h3id=Docker_Installation>Docker Installation</h3><p>Docker provides containerization for the applications we'll be running (Frigate, MediaMTX, etc.). This keeps dependencies isolated and makes deployment much cleaner.<p>Install Docker using the official convenience script from <ahref=https://docs.docker.com/engine/install/debian/#install-using-the-convenience-script>docker.com</a>:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# let your user run docker without sudo
sudo usermod -aG docker $USER
</code></pre><p>After installation, log out and back in for group membership changes to take effect.<p>Configure Docker to start automatically on boot:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>sudo systemctl enable docker.service
sudo systemctl enable containerd.service
</code></pre><h3id=Test_the_Edge_TPU_(Optional)>Test the Edge TPU (Optional)</h3><p>To verify the Edge TPU works inside a Docker container, we can build a test image. This is particularly useful if you plan to use the TPU with containerized applications.<p>Create a test directory and Dockerfile:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>mkdir coraltest
cd coraltest
nano Dockerfile
</code></pre><p>Into the new file, paste:<preclass=language-Dockerfiledata-lang=Dockerfile><codeclass=language-Dockerfiledata-lang=Dockerfile>FROM debian:10
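
# A minimal test image -- a sketch based on Jeff Geerling's Coral-in-Docker
# writeup; the package list is an assumption, trim it as needed.
WORKDIR /home
ENV HOME /home
RUN apt-get update && apt-get install -y curl gnupg ca-certificates python3
RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | tee /etc/apt/sources.list.d/coral-edgetpu.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update && apt-get install -y edgetpu-examples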
</code></pre><p>Build and run the test container, passing through the Coral device:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh># build the docker container
docker build -t coraltest .
# run it, passing the Coral PCIe device through to the container
# (the image tag is arbitrary)
docker run -it --device /dev/apex_0 coraltest /bin/bash
</code></pre><p>Inside the container, run an inference example:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh># run an inference example from within the container
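# A sketch assuming the edgetpu-examples package is installed in the
# image -- list /usr/share/edgetpu/examples to confirm the actual paths.
python3 /usr/share/edgetpu/examples/classify_image.py \
--model /usr/share/edgetpu/examples/models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--label /usr/share/edgetpu/examples/models/inat_bird_labels.txt \
--image /usr/share/edgetpu/examples/images/parrot.jpg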
</code></pre><p>You should see inference results with confidence values from the Edge TPU. If not, try a clean restart of the system.<h3id=Portainer_(Optional)>Portainer (Optional)</h3><p>Portainer provides a web-based GUI for managing Docker containers, images, and volumes. While not required, it makes container management significantly more convenient.<blockquoteclass=markdown-alert-note><p>This is optional, gives you a browser GUI for your various docker containers.</blockquote><p>Install Portainer:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>docker volume create portainer_data
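# Standard install command from the Portainer CE docs; port 9443 serves
# the HTTPS web UI used below.
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest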
</code></pre><p>Access Portainer in your browser and set an admin password:<ul><li>Navigate to: <ahref=https://airaspi.local:9443>https://airaspi.local:9443</a></ul><h3id=VNC_Setup_(Optional)>VNC Setup (Optional)</h3><p>VNC provides remote desktop access to your headless Raspberry Pi. This is particularly useful for testing cameras and debugging visual issues without connecting a physical monitor.<blockquoteclass=markdown-alert-note><p>This is optional, useful to test your cameras on your headless device. You could attach a monitor, but I find VNC more convenient.</blockquote><p>Enable VNC through the Raspberry Pi configuration tool:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>sudo raspi-config
</code></pre><p>Navigate to: <strong>Interface Options</strong> → <strong>VNC</strong> → <strong>Enable</strong><h3id=Connecting_through_VNC_Viewer>Connecting through VNC Viewer</h3><p>Install <ahref=https://www.realvnc.com/en/connect/download/viewer/>RealVNC Viewer</a> on your computer (available for macOS, Windows, and Linux).<p>Connect using the address: <code>airaspi.local:5900</code><p>You'll be prompted for your Raspberry Pi username and password. Once connected, you'll have full remote desktop access for testing cameras and debugging.<h2id=Frigate_NVR_Setup>Frigate NVR Setup</h2><p>Frigate is a complete Network Video Recorder (NVR) with real-time object detection powered by the Coral Edge TPU. It's the heart of this edge AI system.<h3id=Docker_Compose_Configuration>Docker Compose Configuration</h3><p>This setup uses Docker Compose to define the Frigate container with all necessary configurations. If you're using Portainer, you can add this as a custom stack.<blockquoteclass=markdown-alert-important><p>Important: you need to change the paths to your own paths.</blockquote><preclass=language-yamldata-lang=yaml><codeclass=language-yamldata-lang=yaml>version: "3.9"
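
# Sketch of the remaining stack definition, reconstructed from the key
# points summarized below -- the device and volume paths are assumptions,
# adjust them to your system.
services:
  frigate:
    container_name: frigate
    privileged: true                  # hardware access (TPU, encoder)
    restart: unless-stopped
    image: ghcr.io/blakeblackshear/frigate:stable
    shm_size: "128mb"                 # shared memory for video frames
    devices:
      - /dev/apex_0:/dev/apex_0       # Coral PCIe Edge TPU
      - /dev/video11:/dev/video11     # Pi hardware H.264 encoder
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /home/aron/frigate/config.yml:/config/config.yml
      - /home/aron/frigate/storage:/media/frigate
    ports:
      - "5000:5000"                   # web UI
      - "8554:8554"                   # RTSP restreaming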
</code></pre><p>Key configuration points in this Docker Compose file:<ul><li><strong>Privileged mode</strong> and <strong>device mappings</strong>: Required for accessing hardware (TPU, cameras)<li><strong>Shared memory size</strong>: Allocated for processing video frames efficiently<li><strong>Port mappings</strong>: Exposes Frigate's web UI (5000) and RTSP streams (8554)<li><strong>Volume mounts</strong>: Persists recordings, config, and database</ul><h3id=Frigate_Configuration_File>Frigate Configuration File</h3><p>Frigate requires a YAML configuration file to define cameras, detectors, and detection zones. Create this file at the path you specified in the docker-compose file (e.g., <code>/home/aron/frigate/config.yml</code>).<blockquoteclass=markdown-alert-note><p>This is necessary just once. Afterwards, you will be able to change the config in the GUI.</blockquote><p>Here's a working configuration using the Coral TPU:<preclass=language-yamldata-lang=yaml><codeclass=language-yamldata-lang=yaml>mqtt:
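  enabled: false                      # local-only setup, no MQTT broker

# Sketch of the rest of the config, matching the points summarized below;
# the stream URLs and camera names are assumptions.
detectors:
  coral:
    type: edgetpu
    device: pci                       # the PCIe Coral appears as /dev/apex_0
  cpu1:
    type: cpu

cameras:
  cam1:
    ffmpeg:
      hwaccel_args: preset-rpi-64-h264
      inputs:
        - path: rtsp://airaspi.local:8900/cam1   # MediaMTX stream
          roles:
            - detect
    detect:
      width: 1280
      height: 720
  cam2:
    ffmpeg:
      hwaccel_args: preset-rpi-64-h264
      inputs:
        - path: rtsp://airaspi.local:8900/cam2
          roles:
            - detect
    detect:
      width: 1280
      height: 720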
</code></pre><p>This configuration:<ul><li><strong>Disables MQTT</strong>: Simplifies setup for local-only operation<li><strong>Defines two detectors</strong>: A Coral TPU detector (<code>coral</code>) and a CPU fallback<li><strong>Uses default detection model</strong>: Frigate includes a pre-trained model<li><strong>Configures two cameras</strong>: Both set to 1280x720 resolution<li><strong>Uses hardware acceleration</strong>: <code>preset-rpi-64-h264</code> for Raspberry Pi 5<li><strong>Detection zones</strong>: Enable only when camera feeds are working properly</ul><h2id=MediaMTX_Setup>MediaMTX Setup</h2><p>MediaMTX is a real-time media server that handles streaming from the Raspberry Pi cameras to Frigate. It's necessary because Frigate doesn't directly support <code>libcamera</code> (the modern Raspberry Pi camera stack).<p>Install MediaMTX directly on the system (not via Docker - the Docker version has compatibility issues with libcamera).<blockquoteclass=markdown-alert-warning><p>Double-check the chip architecture when downloading - this caused me significant headaches during setup.</blockquote><p>Download and install MediaMTX:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>mkdir mediamtx
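cd mediamtx
# Fetch the current release tarball from
# https://github.com/bluenviron/mediamtx/releases -- pick the
# linux_arm64v8 build for the Pi 5 (double-check the architecture!),
# then unpack it here:
tar -xzf mediamtx_*_linux_arm64v8.tar.gz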
</code></pre><h3id=MediaMTX_Configuration>MediaMTX Configuration</h3><p>Edit the <code>mediamtx.yml</code> file to configure camera streams. The configuration below uses <code>rpicam-vid</code> (Raspberry Pi's modern camera tool) piped through FFmpeg to create RTSP streams.<p>Add the following to the <code>paths</code> section in <code>mediamtx.yml</code>:<preclass=language-yamldata-lang=yaml><codeclass=language-yamldata-lang=yaml>paths:
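  # Sketch following the MediaMTX Raspberry Pi camera recipe -- resolution
  # and port match the rest of this post; the exact flags are assumptions
  # to verify against your rpicam-vid and ffmpeg versions.
  cam1:
    runOnInit: bash -c 'rpicam-vid -t 0 --camera 0 --nopreview --codec yuv420 --width 1280 --height 720 --inline -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:8900/cam1'
    runOnInitRestart: yes
  cam2:
    runOnInit: bash -c 'rpicam-vid -t 0 --camera 1 --nopreview --codec yuv420 --width 1280 --height 720 --inline -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:8900/cam2'
    runOnInitRestart: yes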
</code></pre><p>This configuration:<ul><li><strong><code>cam1</code> and <code>cam2</code></strong>: Define two camera paths<li><strong><code>rpicam-vid</code></strong>: Captures YUV420 video from Raspberry Pi cameras<li><strong><code>ffmpeg</code></strong>: Transcodes the raw video to H.264 RTSP stream<li><strong><code>runOnInitRestart: yes</code></strong>: Automatically restarts the stream if it fails</ul><h3id=Port_Configuration>Port Configuration</h3><p>Change the default RTSP port to avoid conflicts with Frigate:<p>In <code>mediamtx.yml</code>, change:<preclass=language-yamldata-lang=yaml><codeclass=language-yamldata-lang=yaml>rtspAddress: :8554
# to:
rtspAddress: :8900
</code></pre><p>Otherwise there will be a port conflict with Frigate.<h3id=Start_MediaMTX>Start MediaMTX</h3><p>Run MediaMTX in the foreground to verify it's working:<preclass=language-zshdata-lang=zsh><codeclass=language-zshdata-lang=zsh>./mediamtx
</code></pre><p>If there are no errors, verify your streams using VLC or another RTSP client:<ul><li><code>rtsp://airaspi.local:8900/cam1</code><li><code>rtsp://airaspi.local:8900/cam2</code></ul><p>Note: The default RTSP port is 8554, but we changed it to 8900 in the config.<h2id=Current_Status_and_Performance>Current Status and Performance</h2><h3id="What's_Working">What's Working</h3><p>The system successfully streams from both cameras at 30fps and 720p resolution. The Coral Edge TPU performs object detection with minimal latency - the TPU itself is not breaking a sweat and maintains consistently high performance.<p>According to Frigate documentation, the TPU can handle up to 10 cameras, so there's significant headroom for expansion.<h3id=Current_Issues>Current Issues</h3><p>However, there are several significant problems hampering the system:<p><strong>1. Frigate Display Limitations</strong><p>Frigate limits the display FPS to 5, which is depressing to watch, especially since the TPU is nowhere near its limits. The hardware is clearly capable of much more, but software limitations hold it back.<p><strong>2. Stream Stability Problems</strong><p>The stream is completely erratic and drops frames constantly. I've sometimes observed detect FPS as low as 0.2, but the TPU speed should definitely not be the bottleneck here. One potential solution might be to attach the cameras to a separate device and stream from there.<p><strong>3. Coral Software Abandonment</strong><p>The biggest issue is that Google seems to have abandoned the Coral ecosystem, even though they just released new hardware for it. Their most recent Python build supports only Python 3.9.<p>Specifically, <code>pycoral</code> appears to be the problem - without a decent update, I'm confined to Debian 10 with Python 3.7.3. That sucks.
There are custom wheels available, but nothing that seems plug-and-play.<p>This severely limits the ability to use modern software and libraries with the system.<h2id=Reflections_and_Lessons_Learned>Reflections and Lessons Learned</h2><h3id=Hardware_Decisions>Hardware Decisions</h3><p><strong>The M.2 E Key Choice</strong><p>The decision to go for the M.2 E key version to save money, instead of spending more on the USB version, was a huge mistake. Please do yourself a favor and spend the extra 40 bucks.<p>Technically, it's probably faster and better for continuous operation, but I have yet to feel the benefit of that. The USB version would have offered far more flexibility and easier debugging.<h2id=Future_Development>Future Development</h2><p>Several improvements and experiments are planned to enhance this system:<p><strong>Documentation and Visual Aids</strong><ul><li>Add images and screenshots to this build log to make it easier to follow</ul><p><strong>Mobile Stream Integration</strong><ul><li>Check whether <ahref=https://vdo.ninja>vdo.ninja</a> is a viable way to add mobile streams, enabling smartphone camera integration and evaluation</ul><p><strong>MediaMTX libcamera Support</strong><ul><li>Reach out to the MediaMTX developers about bumping libcamera support, which would eliminate the current <code>rpicam-vid</code> workaround. 
I suspect there's quite a lot of performance lost in the current pipeline.</ul><p><strong>Frigate Configuration Refinement</strong><ul><li>Tweak the Frigate config to enable snapshots and potentially build an image/video database for training custom models later</ul><p><strong>Storage Expansion</strong><ul><li>Worry about attaching an external SSD and saving the video files on it for long-term storage and analysis</ul><p><strong>Data Export Capabilities</strong><ul><li>Find a way to export the landmark points from Frigate, potentially sending them via OSC (like in my <ahref=/project/pose2art/>pose2art</a> project) for creative applications</ul><p><strong>Dual TPU Access</strong><ul><li>Find a different HAT that lets me access the other TPU - I have the dual version, but can currently only access 1 of the 2 TPUs due to hardware restrictions</ul></article><hr><navid=post-nav><aclass="post-nav-itemp