Aron Petau 2025-05-11 15:26:20 +02:00
parent 3bd579da3c
commit 4291b1bf48
327 changed files with 3334 additions and 191 deletions

@@ -30,10 +30,10 @@ banner = "prusa.jpg"
show_copyright = true
show_shares = true
+++
{% gallery() %}
```json
[
{
"file": "cloning_station.jpg",
@@ -85,7 +85,7 @@ show_shares = true
"title": "A custom-built printer enclosure made up of 3 Ikea Lack tables and around 3 kgs of plastic."
}
]
{% end %}
```
## 3D Printing
@@ -108,8 +108,6 @@ I built both of them from kits and heavily modified them. I control them via oct
Through it, I felt more at home using Linux, programming, soldering, incorporating electronics, and iteratively designing.
I love the abilities a 3D Printer gives me and plan on using it for the [recycling](/plastic-recycling/) project.
{{ gallery(name="gallery") }}
During the last half year, I also worked with 3D printers in a university context.
We conceptualized and established a "Digitallabor", an open space meant to give everyone hands-on access to innovative technologies. The idea was to create a form of makerspace with an emphasis on digital media.
The project is young; it started in August last year, so most of my tasks took place in working groups, deciding on the types of machines and the kinds of content through which such a project can provide value.

@@ -20,7 +20,7 @@ tags = [
"university of osnabrück"
]
[extra]
banner = "/images/ballpark_menu.png"
banner = "ballpark_menu.png"
show_copyright = true
show_shares = true

@@ -0,0 +1,61 @@
+++
title = "Chatbot"
date = 2020-07-15
authors = ["Aron Petau"]
description = "A speech-controlled meditation assistant and sentiment tracker"
[taxonomies]
tags = [
"chatbot",
"data viz",
"google assistant",
"google cloud",
"google dialogflow",
"meditation",
"nlp",
"nlu",
"python",
"speech interface",
"sql",
"university of osnabrück",
"voice assistant",
"work"
]
[extra]
banner = "fulfillment-flow.png"
show_copyright = true
show_shares = true
+++
## Guru to Go: a speech-controlled meditation assistant and sentiment tracker
{{ youtube(id="R73vAH37TC0") }}
Here you see a demo video of the voice-controlled meditation assistant that we worked on in the course "Conversational Agents and speech interfaces".
<div class="buttons">
<a class="colored external" href="https://w3o.ikw.uni-osnabrueck.de/scheinmaker/export/details/76/67">Course Description</a>
</div>
The central goal of the project was to make the assistant entirely speech-controlled, so that the phone never needs to be touched while you immerse yourself in meditation.
The chatbot was built in Google Dialogflow, a natural language understanding engine that interprets free text input and identifies the entities and intents within it.
We then wrote a custom Python backend that takes these evaluated intents and computes individualized responses.
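To make that fulfillment flow concrete, here is a minimal sketch of such a webhook backend in Flask. The route, intent names, and parameters are hypothetical placeholders rather than the actual backend (which is linked on GitHub below); Dialogflow ES simply POSTs the detected intent and expects a `fulfillmentText` in return.
```python
# Sketch only: a Dialogflow ES webhook in Flask. The route, intent names and
# parameters are hypothetical placeholders, not the actual backend.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(force=True)
    intent = req["queryResult"]["intent"]["displayName"]
    params = req["queryResult"].get("parameters", {})

    if intent == "StartMeditation":            # hypothetical intent name
        minutes = int(params.get("duration", 5))
        reply = f"Starting a {minutes}-minute meditation. Close your eyes."
    elif intent == "LogSentiment":             # hypothetical intent name
        mood = params.get("mood", "neutral")
        reply = f"Noted, you are feeling {mood}. I will add it to your history."
    else:
        reply = "Sorry, I did not catch that."

    # Dialogflow reads the fulfillmentText back to the user in Google Assistant.
    return jsonify({"fulfillmentText": reply})

if __name__ == "__main__":
    app.run(port=8080)
```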
The resulting application runs in Google Assistant and can adaptively deliver meditations, visualize sentiment history, and comprehensively inform about meditation practices. Sadly, we built on beta functionality from the older "Google Assistant" framework, which Google rebranded months later as "Actions on Google", changing core functionality and requiring an extensive migration that neither Chris, my partner in this project, nor I found time to do.
Nevertheless, the whole chatbot functioned as a meditation player and was able to graph and store recorded sentiments over time for each user.
Attached below, you can also find our final report with details on the programming and our thought process.
<div class="buttons">
<a class="colored external" href="https://acrobat.adobe.com/link/track?uri=urn:aaid:scds:US:23118565-e24e-4586-b0e0-c0ef7550a067">Read the full report</a>
</div>
<div class="buttons">
<a class="colored external" href="https://github.com/cstenkamp/medibot_pythonbackend">Look at the Project on GitHub</a>
</div>
{% alert(note=true) %}
This was my first dip into using the Google framework to create a speech assistant, and I encountered many problems along the way that partly found their way into the final report. I have since managed to build on these explorations and am currently working on [Ällei](/allei/), another chatbot with a different focus, which is not realized within Actions on Google but will instead get its own React app on a website.
{% end %}

@@ -0,0 +1,40 @@
+++
title = "Lusatia - an immersion in (De)Fences"
authors = ["Aron Petau"]
description = "A selection of images from the D+C Studio Class 2023"
banner = "/images/lusatia/lusatia_excavator.jpg"
date = 2023-07-27
[taxonomies]
tags = [
"agisoft metashape",
"barriers",
"borders",
"climate",
"coal",
"drone",
"energy",
"environment",
"exploitation",
"fences",
"lusatia",
"photogrammetry",
"studio d+c",
"tempelhofer feld",
"unity",
"university of the arts berlin"
]
[extra]
show_copyright = true
show_shares = true
+++
{{ youtube(id="kx6amt2jY7U") }}
On an excursion to Lusatia, a project with the working title (De)Fences was born.
Here are the current materials.
<iframe width="100%" height="1024" frameborder="0" allow="xr-spatial-tracking; gyroscope; accelerometer" allowfullscreen scrolling="no" src="https://kuula.co/share/collection/7F22J?logo=1&info=1&fs=1&vr=0&zoom=1&autop=5&autopalt=1&thumbs=3&alpha=0.60"></iframe>
TODO: upload unity project

@@ -0,0 +1,57 @@
+++
title = "Stable Dreamfusion"
description = "An exploration of 3D mesh generation through AI"
date = 2023-06-20
authors = ["Aron Petau"]
banner = "/images/dreamfusion/sd_pig.png"
[taxonomies]
tags = [
"3D graphics",
"TODO, unfinished",
"ai",
"dreamfusion",
"generative",
"mesh",
"studio d+c",
"university of the arts berlin"
]
[extra]
show_copyright = true
show_shares = true
+++
## Stable Dreamfusion
<div class="sketchfab-embed-wrapper"> <iframe title="Stable-Dreamfusion Pig" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/0af6d95988e44c73a693c45e1db44cad/embed?ui_theme=dark&dnt=1"> </iframe> </div>
## Sources
I forked a really popular implementation that reverse-engineered the Google Dreamfusion algorithm, which is closed-source and not publicly available.
The implementation I forked is [here](https://github.com/arontaupe/stable-dreamfusion).
It runs on Stable Diffusion as its base, which means the results are expected to be worse than Google's.
The original implementation is [here](https://dreamfusion3d.github.io).
{{ youtube(id="shW_Jh728yg") }}
## Gradio
The reason I forked the code was to implement my own Gradio interface for the algorithm. Gradio is a great tool for quickly building interfaces for machine learning models. No code involved: any user can state their wish, and the mechanism will spit out a ready-to-be-rigged model (an OBJ file).
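As a rough illustration of how little glue code Gradio needs for this, here is a minimal sketch of such an interface. The CLI flags and the mesh output path are assumptions about how the repository is driven, not a verbatim excerpt from my fork.
```python
# Sketch only: a Gradio front end for a text-to-3D run; the main.py flags and
# the mesh output path are assumptions, adjust them to the actual fork.
import subprocess
import gradio as gr

def generate_mesh(prompt: str) -> str:
    workspace = "runs/" + prompt.strip().replace(" ", "_")
    # Kick off a (roughly 20 minute) dreamfusion run for the given prompt.
    subprocess.run(
        ["python", "main.py", "--text", prompt, "-O", "--workspace", workspace],
        check=True,
    )
    return f"{workspace}/mesh/mesh.obj"  # assumed location of the exported mesh

demo = gr.Interface(
    fn=generate_mesh,
    inputs=gr.Textbox(label="Describe the object you wish for"),
    outputs=gr.File(label="Generated mesh (.obj)"),
    title="Chamber of Wishes",
)

if __name__ == "__main__":
    demo.launch()
```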
## Mixamo
I used Mixamo to rig the model. It is a great tool for rigging and animating models and, above all, it is simple: as long as you have a model with a decent humanoid shape in something like a T-pose, you can rig it in seconds. That is exactly what I did here.
## Unity
I used Unity to render the model to the Magic Leap 1.
Through this, I could create an interactive and immersive environment with the generated models.
The dream was to build an AI chamber of wishes.
You pick up the glasses, state your desires, and the algorithm presents an almost-real object to you in AR.
Without access to the proprietary sources from Google, and with the beefy but still not quite machine-learning-ready computers we have at the studio, the results are not quite as good as I hoped.
Still, the results are quite interesting and I am happy with the outcome.
A single generated object in the box takes roughly 20 minutes to generate.
Even then, the algorithm is quite particular and often will not generate anything coherent at all.

@@ -0,0 +1,44 @@
+++
title = "Auraglow"
description = "Das Wesen der Dinge - Perspectives on Design"
date = 2023-03-01
authors = ["Aron Petau", "Sebastian Paintner", "Milli Keil"]
banner = "cage_closeup.jpeg"
[taxonomies]
tags = [
"aruco",
"ar",
"aura",
"feng shui",
"hand recognition",
"image recognition",
"journal",
"light tracking",
"magic leap",
"particle systems",
"relations",
"studio d+c",
"unity",
"university of the arts berlin"
]
[extra]
show_copyright = true
show_shares = true
+++
![The AR set that we used](cage_closeup_2.jpeg)
What makes a room?\
How do moods and atmospheres emerge?\
Can we visualize them to make the experiences visible?
The project "The Nature of Objects" aims to expand (augment) perception by making the moods of places tangible through the respective auras of the objects in the space.\
What makes objects subjects?\
How can we make the implicit explicit?\
And how can we make the character of a place visible?\
Here, we question the conservative, purely physical concept of space and address a temporal, historical component of space, its objects, and their past.
Space will have transformed: from a simple "object on which interest, thought, action is directed" (Duden definition of "object") to a "creature that is endowed with consciousness, thinking, sensing, acting" (Duden definition of "subject").
This metamorphosis of subject formation in objects allows the space to undergo change, or more precisely a shaping, reshaping, and deformation, such that the space can finally be perceived differently and from multiple angles.
[See the Project on GitHub](https://github.com/arontaupe/auraglow){: .btn .btn--large}

@@ -0,0 +1,66 @@
+++
title = "Ruminations"
description = "Perspectives on Engineering"
date = 2023-03-01
authors = ["Aron Petau", "Niels Gercama"]
banner = "ruminations1.jpeg"
[taxonomies]
tags = [
"amazon",
"browser fingerprinting",
"capitalism",
"computer vision",
"consumerism",
"data",
"data privacy",
"image classifier",
"journal",
"javascript",
"pattern recognition",
"privacy",
"studio d+c",
"TODO, unfinished",
"university of the arts berlin"
]
[extra]
show_copyright = true
show_shares = true
+++
## Ruminations
was a contemplation on data privacy at Amazon.
It asks how to subvert browser fingerprinting and evade the omnipresent tracking of the consumer.
The initial idea was to interact with the perpetrator while letting data accumulate that would degrade their knowledge and thereby destroy predictability, making this particular dataset worth less.
We could have just added a random click bot to confuse things a bit and make the data less valuable.
But looking at today's state of data-cleanup algorithms and the sheer amount of data that is collected, this would have been a futile attempt: Amazon simply detects and removes any noise we add and continues to use the data.
So, then, how can we create coherent, non-random data that is still not predictable?
One answer, which this concept is meant to demonstrate, is to insert patterns that Amazon cannot foresee with their current algorithms, as if they were trying to predict the actions of a person with schizophrenia.
## The Concept
It consists of a browser extension (currently Chrome only) that overlays all Amazon web pages with a moving entity that tracks your behavior. While tracking, an image classifier is used to formulate a product query from the storefront. After computation, a perfectly fitting product is displayed for your consumer's pleasure.
## The analogue watchdog
A second part of the project is a low-tech installation consisting of a camera (we used a smartphone) running a computer-vision algorithm that tracks tiny movements. It was pointed at the browser console of the laptop running the extension and connected to a screen that displayed the captured image. The watchdog was trained to make robot noises depending on the type and amount of movement detected. Effectively, whenever data traffic between Amazon and the browser was detected, the watchdog would start making noises.
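A minimal sketch of that watchdog logic, assuming plain frame differencing in OpenCV; the threshold and the terminal bell standing in for the robot noises are placeholders, not the installation's actual code.
```python
# Sketch only: difference consecutive camera frames and make noise when the
# pixels showing the browser console start changing, i.e. when traffic renders.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)              # the installation used a smartphone camera
_, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)     # pixel-wise change since the last frame
    prev = gray
    motion = np.mean(diff)             # average change, a crude motion measure
    if motion > 2.0:                   # assumed threshold for "tiny movements"
        print("\a", end="", flush=True)  # terminal bell instead of robot noises
    cv2.imshow("watchdog", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```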
## The Browser extension
![The project installation](ruminations1.jpeg)
![The project installation](ruminations2.jpeg)
![The project installation](ruminations3.jpeg)
### Find the code on GitHub
Subvert a bit yourself, or just have a look at the code.
[The code of the Project on GitHub](https://github.com/arontaupe/ruminations)
TODO: create video with live demo

@@ -0,0 +1,60 @@
+++
title = "Autoimmunitaet"
description = "A playful interactive experience to reflect on the societal value of the car"
date = 2023-06-20
authors = ["Aron Petau", "Milli Keil", "Marla Gaiser"]
[taxonomies]
tags = [
"3D printing",
"action figure",
"aufstandlastgen",
"cars",
"interactive",
"last generation",
"studio d+c",
"suv",
"university of the arts berlin"
]
[extra]
banner = "autoimmunitaet-1.jpg"
show_copyright = true
show_shares = true
+++
## How do we design our Commute?
In the context of the Design and Computation studio course, [Milli Keil](https://millikeil.eu), [Marla Gaiser](https://marlagaiser.de) and I developed a concept for a playful critique of the traffic decisions we take and the idols we embrace.\
It should open up the questions of whether the generations to come should still grow up playing on traffic carpets that are mostly grey, and whether the [Letzte Generation](https://letztegeneration.org), a political climate activist group in Germany, receives enough recognition for their acts.
A call for solidarity.
![The action figures](/assets/images/autoimmunitaet/autoimmunitaet-2.jpg)
{: .center}
## The scan results
<div class="sketchfab-embed-wrapper"> <iframe title="Autoimmunitaet: Letzte Generation Actionfigure" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/3916ba600ef540d0a874506bf61726f2/embed?ui_hint=0&ui_theme=dark&dnt=1"> </iframe> </div>
## The Action Figure, ready for printing
<div class="sketchfab-embed-wrapper"> <iframe title="Autoimmunitaet: Letzte Generation Action Figure" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/deec1b2899af424c91f85cbf35952375/embed?ui_theme=dark&dnt=1"> </iframe> </div>
## Autoimmunitaet
Autoimmunity is a term for defects that are produced by a dysfunctional self-tolerance of a system.\
This dysfunction causes the immune system to stop accepting certain parts of itself and to build antibodies against them instead.\
An invitation for a speculative playful interaction.
![Our action figures in action](autoimmunitaet-1.jpg)
![Our action figures in action](autoimmunitaet-3.jpg)
![Our action figures in action](autoimmunitaet-5.jpg)
![Our action figures in action](autoimmunitaet-6.jpg)
![Our action figures in action](autoimmunitaet-7.jpg)
![Our action figures in action](autoimmunitaet-8.jpg)
## The Process
The figurines are 3D scans of ourselves in various typical poses of the Letzte Generation.\
We used photogrammetry to create the scans, a technique that uses many photos of an object to create a 3D model of it.\
We used the app [Polycam](https://polycam.ai) to create the scans on iPads with their built-in LiDAR scanners.

@@ -0,0 +1,53 @@
+++
title = "Dreams of Cars"
description = "A subversive urban intervention"
date = 2023-06-20
authors = ["Aron Petau"]
[taxonomies]
tags = [
"ads",
"cars",
"dreams",
"greenscreen",
"imaginaries",
"lightroom",
"photoshop",
"photography",
"studio d+c",
"suv",
"university of the arts berlin",
"urban intervention"
]
[extra]
banner = "suv_door-1.jpg"
show_copyright = true
show_shares = true
+++
## Photography
In the context of the course "Fotografie Elementar" with Sebastian Herold, I developed a small concept for an urban intervention.\
The results were exhibited at the UdK Rundgang 2023 and are also visible here.
![The gallery piece](suv_door-1.jpg)
## Dreams of Cars
These are not just cars.\
They are Sport Utility Vehicles.\
What might they have had as hopes and dreams on the production line?\
Do they dream of drifting in dusty deserts?\
Climbing steep rocky canyon roads?\
Sliding down sun-drenched dunes?\
Discovering remote pathways in natural grasslands?\
Nevertheless, they did end up in the parking spots here in Berlin.
What drove them here?
![Dreams of Cars 1](Dreams_of_Cars-1.jpg)
![Dreams of Cars 2](Dreams_of_Cars-2.jpg)
![Dreams of Cars 3](Dreams_of_Cars-3.jpg)
![Dreams of Cars 4](Dreams_of_Cars-4.jpg)
![Dreams of Cars 5](Dreams_of_Cars-5.jpg)
![Dreams of Cars 6](Dreams_of_Cars-6.jpg)
![Dreams of Cars 7](Dreams_of_Cars-7.jpg)

@@ -0,0 +1,467 @@
+++
title = "AIRASPI Build Log"
authors = ["Aron Petau"]
description = "Utilizing an edge TPU to build an edge device for image recognition and object detection"
date = 2024-01-30
[taxonomies]
tags = [
"coral",
"docker",
"edge TPU",
"edge computing",
"frigate",
"local AI",
"private",
"raspberry pi",
"surveillance"
]
[extra]
show_copyright = true
show_shares = true
+++
## AI-Raspi Build Log
This should document the rough steps to recreate airaspi as I go along.
Rough idea: build an edge device with image recognition and object detection capabilities.\
It should run in real time, aiming for 30 fps at 720p.\
Portability and usage at installations are a priority, so it has to function without an active internet connection and be as small as possible.\
It would be a real edge device, with no computation happening in the cloud.
Inspo from: [pose2art](https://github.com/MauiJerry/Pose2Art)
## Hardware
- [Raspberry Pi 5](https://www.raspberrypi.com/products/raspberry-pi-5/)
- [Raspberry Pi Camera Module v1.3](https://www.raspberrypi.com/documentation/accessories/camera.html)
- [Raspberry Pi GlobalShutter Camera](https://www.raspberrypi.com/documentation/accessories/camera.html)
- 2x CSI FPC Cable (needs one compact side to fit pi 5)
- [Pineberry AI Hat (m.2 E key)](https://pineberrypi.com/products/hat-ai-for-raspberry-pi-5)
- [Coral Dual Edge TPU (m.2 E key)](https://www.coral.ai/products/m2-accelerator-dual-edgetpu)
- Raspi Official 5A Power Supply
- Raspi active cooler
## Setup
### Most important sources used
[coral.ai](https://www.coral.ai/docs/m2/get-started/#requirements)
[Jeff Geerling](https://www.jeffgeerling.com/blog/2023/pcie-coral-tpu-finally-works-on-raspberry-pi-5)
[Frigate NVR](https://docs.frigate.video)
### Raspberry Pi OS
I used the Raspberry Pi Imager to flash the latest Raspberry Pi OS to an SD card.
Needs to be Debian Bookworm.\
Needs to be the full arm64 image (with desktop), otherwise you will get into camera driver hell.
{: .notice}
Settings applied:
- used the default arm64 image (with desktop)
- enable custom settings:
- enable ssh
- set wifi country
- set wifi ssid and password
- set locale
- set hostname: airaspi
### update
This is always good practice on a fresh install. It takes quite a long time with the full OS image.
```zsh
sudo apt update && sudo apt upgrade -y && sudo reboot
```
### prep system for coral
Thanks again, @Jeff Geerling; this is completely out of my comfort zone, and I rely on people writing solid tutorials like this one.
```zsh
# check kernel version
uname -a
```
```zsh
# modify config.txt
sudo nano /boot/firmware/config.txt
```
While in the file, add the following lines:
```config
kernel=kernel8.img
dtparam=pciex1
dtparam=pciex1_gen=2
```
Save and reboot:
```zsh
sudo reboot
```
```zsh
# check kernel version again
uname -a
```
- The kernel version should be different now, with a -v8 at the end.
Next, edit /boot/firmware/cmdline.txt:
```zsh
sudo nano /boot/firmware/cmdline.txt
```
- Add pcie_aspm=off before rootwait, then reboot:
```zsh
sudo reboot
```
### change device tree
#### wrong device tree
The script simply did not work for me.
Maybe this script is the issue?
I will try again without it.
{: .notice}
```zsh
curl https://gist.githubusercontent.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e/raw/32d21f73bd1ebb33854c2b059e94abe7767c3d7e/coral-ai-pcie-edge-tpu-raspberrypi-5-setup | sh
```
- Yes, it was the issue; I wrote a comment about it on the gist:
[comment](https://gist.github.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e?permalink_comment_id=4860232#gistcomment-4860232)
What to do instead?
Here, I followed Jeff Geerling to a T. Please refer to his tutorial for more information.
In the meantime, the script has been updated and is now recommended again.
{: .notice}
```zsh
# Back up the current dtb
sudo cp /boot/firmware/bcm2712-rpi-5-b.dtb /boot/firmware/bcm2712-rpi-5-b.dtb.bak
# Decompile the current dtb (ignore warnings)
dtc -I dtb -O dts /boot/firmware/bcm2712-rpi-5-b.dtb -o ~/test.dts
# Edit the file
nano ~/test.dts
# Change the line: msi-parent = <0x2f>; (under `pcie@110000`)
# To: msi-parent = <0x66>;
# Then save the file.
# Recompile the dtb and move it back to the firmware directory
dtc -I dts -O dtb ~/test.dts -o ~/test.dtb
sudo mv ~/test.dtb /boot/firmware/bcm2712-rpi-5-b.dtb
```
Note: msi-parent seems to carry the value <0x2c> nowadays; this cost me a few hours.
{: .notice}
### install apex driver
following instructions from [coral.ai](https://coral.ai/docs/m2/get-started#2a-on-linux)
```zsh
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gasket-dkms libedgetpu1-std
sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
sudo groupadd apex
sudo adduser $USER apex
```
Verify with
```zsh
lspci -nn | grep 089a
```
- should display the connected tpu
```zsh
sudo reboot
```
Confirm with the following; if the output is not /dev/apex_0, something went wrong:
```zsh
ls /dev/apex_0
```
### Docker
Install docker, use the official instructions for debian.
```zsh
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
```zsh
# add user to docker group
sudo groupadd docker
sudo usermod -aG docker $USER
```
Probably sourcing .bashrc would have been enough, but I rebooted anyway.
{: .notice}
```zsh
sudo reboot
```
```zsh
# verify with
docker run hello-world
```
### set docker to start on boot
```zsh
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```
### Test the edge tpu
```zsh
mkdir coraltest
cd coraltest
sudo nano Dockerfile
```
Into the new file, paste:
```Dockerfile
FROM debian:10
WORKDIR /home
ENV HOME /home
RUN cd ~
RUN apt-get update
RUN apt-get install -y git nano python3-pip python-dev pkg-config wget usbutils curl
RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
| tee /etc/apt/sources.list.d/coral-edgetpu.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update
RUN apt-get install -y edgetpu-examples
```
```zsh
# build the docker container
docker build -t "coral" .
```
```zsh
# run the docker container
docker run -it --device /dev/apex_0:/dev/apex_0 coral /bin/bash
```
```zsh
# run an inference example from within the container
python3 /usr/share/edgetpu/examples/classify_image.py --model /usr/share/edgetpu/examples/models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --label /usr/share/edgetpu/examples/models/inat_bird_labels.txt --image /usr/share/edgetpu/examples/images/bird.bmp
```
Here you should see the inference results from the edge TPU with some confidence values.\
If not, the safest bet is a clean restart.
### Portainer
This is optional; it gives you a browser GUI for your various Docker containers.
{: .notice}
Install portainer
```zsh
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
Open Portainer in the browser and set the admin password.
- It should be available at <https://airaspi.local:9443>
### vnc in raspi-config
Optional, but useful to test your cameras on your headless device.
You could of course also attach a monitor, but I find this more convenient.
{: .notice}
```zsh
sudo raspi-config
```
- Under Interface Options, enable VNC.
### connect through vnc viewer
Install VNC Viewer on your Mac.\
Use airaspi.local:5900 as the address.
### working docker-compose for frigate
Start this as a custom template in Portainer.
Important: you need to change the paths to your own paths.
{: .notice}
```yaml
version: "3.9"
services:
frigate:
container_name: frigate
privileged: true # this may not be necessary for all setups
restart: unless-stopped
image: ghcr.io/blakeblackshear/frigate:stable
shm_size: "64mb" # update for your cameras based on calculation above
devices:
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
volumes:
- /etc/localtime:/etc/localtime:ro
- /home/aron/frigate/config.yml:/config/config.yml # replace with your config file
- /home/aron/frigate/storage:/media/frigate # replace with your storage directory
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "8554:8554" # RTSP feeds
- "8555:8555/tcp" # WebRTC over tcp
- "8555:8555/udp" # WebRTC over udp
environment:
FRIGATE_RTSP_PASSWORD: "******"
```
### Working frigate config file
Frigate wants this file wherever you specified earlier that it would be.\
This is necessary just once; afterwards, you will be able to change the config in the GUI.
{: .notice}
```yaml
mqtt:
enabled: False
detectors:
cpu1:
type: cpu
num_threads: 3
coral_pci:
type: edgetpu
device: pci
cameras:
cam1: # <++++++ Name the camera
ffmpeg:
hwaccel_args: preset-rpi-64-h264
inputs:
- path: rtsp://192.168.1.58:8900/cam1
roles:
- detect
cam2: # <++++++ Name the camera
ffmpeg:
hwaccel_args: preset-rpi-64-h264
inputs:
- path: rtsp://192.168.1.58:8900/cam2
roles:
- detect
detect:
enabled: True # <+++- disable detection until you have a working camera feed
width: 1280 # <+++- update for your camera's resolution
height: 720 # <+++- update for your camera's resolution
```
### mediamtx
Install mediamtx; do not use the Docker version, it will be painful.
Double-check the chip architecture here; it caused me some headache.
{: .notice}
```zsh
mkdir mediamtx
cd mediamtx
wget https://github.com/bluenviron/mediamtx/releases/download/v1.5.0/mediamtx_v1.5.0_linux_arm64v8.tar.gz
tar xzvf mediamtx_v1.5.0_linux_arm64v8.tar.gz && rm mediamtx_v1.5.0_linux_arm64v8.tar.gz
```
Edit the mediamtx.yml file.
### working paths section in mediamtx.yml
```yaml
paths:
cam1:
runOnInit: bash -c 'rpicam-vid -t 0 --camera 0 --nopreview --codec yuv420 --width 1280 --height 720 --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH'
runOnInitRestart: yes
cam2:
runOnInit: bash -c 'rpicam-vid -t 0 --camera 1 --nopreview --codec yuv420 --width 1280 --height 720 --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH'
runOnInitRestart: yes
```
Also change rtspAddress: :8554\
to rtspAddress: :8900.\
Otherwise, there is a port conflict with Frigate.
With this, you should be able to start mediamtx.
```zsh
./mediamtx
```
If there is no error, you can verify your stream through VLC at rtsp://airaspi.local:8900/cam1 (the default would be 8554, but we changed it in the config file).
### Current Status
I get working streams from both cameras, sending them out at 30 fps at 720p.
Frigate, however, limits the display fps to 5, which is depressing to watch, especially since the TPU doesn't even break a little sweat.
Frigate claims that the TPU is good for up to 10 cameras, so there is headroom.
The stream is completely erratic and drops frames left and right. I have sometimes seen detect fps of 0.2, but the TPU speed should definitely not be the bottleneck here. Maybe attach the cameras to a separate device and stream from there?
The biggest issue is that the Google folks seem to have abandoned the Coral, even though they just released a new piece of hardware for it.
Their most RECENT Python build is 3.9.
Specifically, pycoral seems to be the problem there. Without a decent update, I will be confined to Debian 10, with Python 3.7.3.
That sucks.
There are custom wheels, but nothing that seems plug and play.
About the rest of this setup:
the decision to go for the m.2 E-key version to save money, instead of spending more on the USB version, was a huge mistake.
Please do yourself a favor and spend the extra 40 bucks.
Technically, it's probably faster and better for continuous operation, but I have yet to feel the benefit of that.
### TODOs
- Add images and screenshots to the build log.
- Check whether vdo.ninja is a viable way to add mobile streams; then smartphone stream evaluation would be on the horizon.
- Bother the mediamtx makers about the libcamera bump, so we can get rid of the rpicam-vid hack. I suspect quite a lot of performance is lost there.
- Tweak the Frigate config to get snapshots and maybe build an image / video database to later train a custom model.
- Worry about attaching an external SSD and saving the video files on it.
- Find a way to export the landmark points from Frigate. Maybe send them via OSC like in pose2art? See the sketch below.
- Find a different hat that lets me access the other TPU? I have the dual version, but can currently only access 1 of the 2 TPUs due to hardware restrictions.
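Regarding the OSC idea, a minimal sketch could look like this. It assumes MQTT is enabled in the Frigate config (it is disabled above), that a broker runs on localhost, and that an OSC receiver listens on port 8000; the topic and payload fields follow Frigate's documented event format.
```python
# Sketch only: bridge Frigate detections to OSC. Assumes MQTT enabled in Frigate,
# a broker on localhost:1883, and an OSC receiver (e.g. a Pose2Art-style patch)
# on port 8000.
import json

import paho.mqtt.client as mqtt                    # paho-mqtt 1.x constructor below;
from pythonosc.udp_client import SimpleUDPClient   # 2.x needs a CallbackAPIVersion

osc = SimpleUDPClient("127.0.0.1", 8000)

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    after = event.get("after", {})
    box = after.get("box")                 # [x1, y1, x2, y2] of the detected object
    if box:
        osc.send_message("/frigate/" + after.get("label", "object"), box)

mqttc = mqtt.Client()
mqttc.on_message = on_message
mqttc.connect("localhost", 1883)
mqttc.subscribe("frigate/events")          # Frigate publishes detection events here
mqttc.loop_forever()
```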

@@ -0,0 +1,44 @@
+++
title = "Local Diffusion"
excerpt = "Empower your own Stable Diffusion Generation: InKüLe supported student workshop: Local Diffusion by Aron Petau"
date = 2024-04-11
authors = ["Aron Petau"]
banner = "images/local-diffusion/local-diffusion.png"
[taxonomies]
tags = [
"automatic1111",
"comfyui",
"diffusionbee",
"inküle",
"Local Computing",
"Stable Diffusion",
"Workshop",
"university of the arts berlin"
]
[extra]
show_copyright = true
show_shares = true
+++
## Local Diffusion
[The official call for the Workshop](https://www.udk-berlin.de/universitaet/online-lehre-an-der-universitaet-der-kuenste-berlin/inkuele/11-april-24-aron-stable-diffusion/)
Is it possible to create a graphic novel with generative A.I.?
What does it mean to use these emerging media in collaboration with others?
And why does their local and offline application matter?
With AI becoming more and more democratised and GPT-like structures increasingly integrated into everyday life, the black-box notion of a mysterious, all-powerful intelligence hinders insightful and effective usage of the emerging tools. One particularly hands-on example is AI-generated images. Within the proposed workshop, we will dive into explainable AI, explore Stable Diffusion, and, most importantly, understand the most important parameters within it. We want to steer outcomes in a deliberate manner. The emphasis here is on open and accessible technology, to increase user agency and make techno-social dependencies and power relations visible.
Empower yourself against readymade technology!
Do not let others decide on what your best practices are. Get involved in the modification of the algorithm and get surprised by endless creative possibilities. Through creating a short graphic novel with 4-8 panels, participants will be able to utilise multiple flavours of the Stable Diffusion algorithm, and will have a non-mathematical understanding of the parameters and their effects on the output within some common GUIs. They will be able to apply several post-processing techniques to their generated images, such as upscaling, masking, inpainting and pose redrawing. Further, participants will be able to understand the structure of a good text prompt, be able to utilise online reference databases and manipulate parameters and directives of the Image to optimise desired qualities. Participants will also be introduced to ControlNet, enabling them to direct Pose and Image composition in detail.
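To give a sense of what "the most important parameters" look like in practice, here is a minimal local generation sketch using the Hugging Face diffusers library; the model ID, device, and parameter values are illustrative defaults, not the exact setup used in the workshop.
```python
# Sketch only: local text-to-image with diffusers; model ID, device and values
# are illustrative defaults, not the workshop's exact setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")                            # use "mps" on Apple Silicon, "cpu" otherwise

generator = torch.Generator("cuda").manual_seed(42)   # fixed seed = reproducible panel

image = pipe(
    prompt="a rainy cyberpunk alley, comic panel, ink and watercolor",
    negative_prompt="blurry, low quality, text, watermark",
    num_inference_steps=30,             # more steps: slower, usually cleaner
    guidance_scale=7.5,                 # how strictly to follow the prompt
    width=512,
    height=512,
    generator=generator,
).images[0]

image.save("panel_01.png")
```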
## Workshop Evaluation
Over the course of 3 hours, I gave an introductory workshop on local Stable Diffusion processing and introduced participants to the server available to UdK students for fast remote computation, which sidesteps the ethical problems of continuously using a proprietary cloud service for similar outputs. There is not much we can do on the data production side, and many ethical dilemmas surrounding digital colonialism remain, but local computation takes one step towards a critical and transparent use of AI tools by artists.
The workshop format was rather open and experimental, which was welcomed by the participants, and they tried out the collages enthusiastically. We also had a refreshing discussion on different positions regarding the ethics, and whether a complete block of these tools is called for and feasible.
I am looking forward to round 2 with the next iteration, where we will definitely dive deeper into the depths of ComfyUI, an interface that I absolutely adore, while its power also terrifies me sometimes.

@@ -0,0 +1,59 @@
+++
title = "Sferics"
description = "On a hunt for the Voice of the out there"
date = 2024-06-20
authors = ["Aron Petau"]
[taxonomies]
tags = [
"antenna",
"electronics",
"magnetism",
"fm",
"geosensing",
"lightning",
"radio",
"sferics",
"university of the arts berlin"
]
[extra]
show_copyright = true
show_shares = true
+++
## What the hell are Sferics?
>A radio atmospheric signal or sferic (sometimes also spelled "spheric") is a broadband electromagnetic impulse that occurs as a result of natural atmospheric lightning discharges. Sferics may propagate from their lightning source without major attenuation in the Earth-ionosphere waveguide, and can be received thousands of kilometres from their source.
- [Wikipedia](https://en.wikipedia.org/wiki/Radio_atmospheric_signal)
## Why catch them?
[Microsferics](https://microsferics.com) is a nice reference project: a network of sferics antennas used to detect lightning strikes. Through triangulation, not unlike the maths happening in GPS, the (more or less) exact location of a strike can be determined. This is useful for weather prediction, but also for the detection of forest fires, which are often caused by lightning strikes.
Because the frequency of the sferics, when converted to audio, is still in the audible range, it is possible to listen to the strikes. This usually sounds a bit like a crackling noise, but can also be quite melodic. I was somewhat reminded of a Geiger counter.
Sferics are in the VLF (very low frequency) range, sitting roughly at 10 kHz, which is a bit of a problem for most radios, as they are not designed to pick up such low frequencies. This is why we built our own antenna.
At 10 kHz, we are talking about insanely large waves: a single wavelength is roughly 30 kilometres. A special property of waves this large is that they are easily reflected by the ionosphere and the Earth's surface. Effectively, a wave like this can bounce around the globe several times before it is absorbed by the ground. This is why we can pick up sferics from all over the world and even listen to Australian lightning strikes. Of course, without the maths, we cannot attribute directions, but the so-called "tweeks" we picked up usually come from at least 2000 km away.
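The 30 km figure is just the wave equation, lambda = c / f, with the speed of light plugged in; a quick sanity check:
```python
# Quick sanity check of the wavelength claim: lambda = c / f
c = 299_792_458      # speed of light in m/s
f = 10_000           # 10 kHz, a typical sferics frequency
print(c / f / 1000)  # ~30 km per wavelength
```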
## The Build
We built several so-called "long-loop" antennas, which are essentially a coil of wire with a capacitor at the end. Further, a specific balun is needed, depending on the length of the wire. This can then directly output an electric signal on an XLR cable.
Loosely based on instructions from [Calvin R. Graf](https://archive.org/details/exploringlightra00graf), we built a 26 m long antenna, looped several times around a wooden frame.
## The Result
We have several hour-long recordings of the Sferics, which we are currently investigating for further potential.
Have a listen to a recording of the Sferics here:
{{ youtube(id="2YYPg_K3dI4") }}
As you can hear, there is quite a bit of 60 Hz ground buzz in the recording.
This is either because the antenna was not properly grounded, or because we were simply still too close to the bustling city.
I think it is already surprising that we got such a clear impression so close to Berlin. Let's see what we can get in the countryside!
![Listening at night](sferics1.jpeg)
![The Drachenberg](sferics2.jpeg)
![The Antenna](sferics3.jpeg)

@@ -0,0 +1,54 @@
+++
title = "Käsewerkstatt"
description = "Building a Food trailer and selling my first Food"
date = 2024-07-05
authors = ["Aron Petau"]
banner = "cheese.jpeg"
[taxonomies]
tags = [
"bruschetta",
"cars",
"food truck",
"raclette",
"workshop"
]
[extra]
show_copyright = true
show_shares = true
+++
## Enter the Käsewerkstatt
One day earlier this year I woke up and realized I had a space problem.
I was trying to build out a workshop and tackle ever more advanced and dusty plastic and woodworking projects, and after another small run-in with my girlfriend, after I had repeatedly crossed the "No-Sanding-and-Linseed-Oiling" policy in our living room, it was time to do something about it.
I am based in Berlin right now, and the housing market is going completely haywire over here (a quick shoutout in solidarity with [Deutsche Wohnen und Co enteignen](https://dwenteignen.de/)).
The end of the story: I won't be able to afford to rent a small workshop anywhere near Berlin anytime soon. As you will notice in some other projects, for example [Autoimmunitaet](/autoimmunitaet), [Commoning Cars](/commoning-cars) or [Dreams of Cars](/dreams-of-cars), I am quite opposed to the idea that it should be considered normal to park one's car in the middle of the city on public space.
So the idea was born to reclaim that space as a habitable zone, taking back usable space from parked cars.
I was going to install a mobile workshop inside a trailer.
Ideally, the trailer should be lockable and have enough standing and working space.
As it turns out, food trailers fulfill these criteria quite nicely. So I set out on a quest to find the cheapest food trailer available in Germany.
Six weeks later, I found it near Munich, got it, and immediately started renovating it.
Due to parallel developments, I was already invited to sell food and have the official premiere at the Bergfest, a weekend format in Brandenburg an der Havel, initiated and organized by [Zirkus Creativo](https://zirkus-creativo.de). Many thanks again for the invitation!
So on it went: I spent some afternoons renovating and outfitting the trailer and did my first-ever shopping at Metro, a local B2B food wholesaler.
Meanwhile, I got into all the paperwork and completed the necessary instructional courses and certificates.
The first food I wanted to sell was raclette on fresh bread, a Swiss dish that is quite popular in Germany.
For the future, the trailer is supposed to tend more towards vegan dishes; as a first tryout, I also sold a bruschetta combo. This turned out great, since the weather was quite hot and the bruschetta was a nice, light snack, while I could use the same type of bread for the raclette.
![The finished Trailer](/assets/images/käsewerkstatt/trailer.jpeg)
The event itself was great, and, in part at least, started paying off the trailer.
Some photos of the opening event at the Bergfest in Brandenburg an der Havel:
![Scraping the cheese](cheese.jpeg)
![The Recommended Combo from the Käsewerkstatt](combo_serve.jpeg)
![The Logo of the Käsewerkstatt, done with the Shaper Origin](logo.jpeg)
We received lots of positive feedback, and I am looking forward to the next event. So, in case you want to have a food truck at your event, hit me up!
Contact me at: [käsewerkstatt@petau.net](mailto:käsewerkstatt@petau.net)

@@ -30,7 +30,7 @@ tags = [
]
[extra]
banner = "/images/masterthesis/puzzle.jpeg"
banner = "puzzle.jpeg"
show_copyright = true
show_shares = true
+++
