Clean up broken duckquill submodule references

Aron Petau 2025-04-30 17:47:32 +02:00
parent 84c80eceaa
commit 0d597798c8
322 changed files with 30223 additions and 4137 deletions

+++
title: 3D Printing
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
header:
teaser: /assets/images/prusa.jpg
video:
id: Yj_Pc357kEU
provider: youtube
credit: Aron Petau
excerpt: My 3D Printing journey and the societal implications of the technology
gallery:
- url: /assets/images/cloning_station.jpg
image_path: /assets/images/cloning_station.jpg
title: "A plant propagation station now preparing our tomatoes for summer"
- url: /assets/images/elk.jpg
image_path: /assets/images/elk.jpg
alt: "elk"
title: "We use this to determine the flatmate of the month"
- url: /assets/images/dragon_skull_1.jpg
image_path: /assets/images/dragon_skull_1.jpg
alt: "dragon skull"
title = "A dragon's head that was later treated to glow in the dark."
- url: /assets/images/ender2.jpg
image_path: /assets/images/ender2.jpg
alt: "ender 2"
title: "This was my entry into a new world: the now ten-year-old Ender 2"
- url: /assets/images/lithophane.jpg
image_path: /assets/images/lithophane.jpg
alt: "lithophane of my Grandparents"
title: "I have made lots of lithophanes, a process where the composition and thickness of the material create an image."
- url: /assets/images/prusa.jpg
image_path: /assets/images/prusa.jpg
title: "This is my second printer, a Prusa i3 MK3S."
- url: /assets/images/vulva_candle.jpg
image_path: /assets/images/vulva_candle.jpg
alt: "vulva on a candle"
title: "This candle is the result of a 3D printed plastic mold that I then poured wax into."
- url: /assets/images/pinecil.jpg
image_path: /assets/images/pinecil.jpg
alt: "pinecil"
title: "An enclosure for my portable soldering iron"
- url: /assets/images/lamp.jpg
image_path: /assets/images/lamp.jpg
alt: "a lamp design"
title: "A lamp screen design that particularly fascinated me; it comes effortlessly from a simple 2D spiral shape."
- url: /assets/images/prusa_enclosure.jpg
image_path: /assets/images/prusa_enclosure.jpg
alt: "Prusa enclosure"
title: "A custom-built printer enclosure made of 3 Ikea Lack tables and around 3 kg of plastic."
tags:
- accessibility
- creality
- decentral
- democratic
- engineering
- experiment
- gcode
- octoprint
- parametric design
- plastics
- prusa
- slicing
- private
- work
- additive manufacturing
- 3D printing
- university of osnabrück
created: 2023-07-26T23:59:03+02:00
last_modified_at: 2023-10-01T20:14:34+02:00
+++
## 3D Printing
### 3D Printing is more than just a hobby for me
In it, I see societal changes, the democratization of production, and creative possibilities. Plastic does not have to be one of our greatest environmental problems if we just choose to change our perspective and behavior toward it.
Plastic injection molding was one of the major driving forces behind the capitalist setting we are in now.
3D Printing can be utilized to counteract production at scale.
Today, the buzzword 3D Printing is already associated with problematic societal practices; it is related to "automation" and the "on-demand economy". The technology has many aspects to consider and evaluate: many awesome things happen through it, and at the same time it fuels developments I would consider problematic. Due to a history of patents influencing the development of the technology, the avid adoption by companies hoping to optimize production processes and margins, but also a very active hobbyist community, all sorts of projects are realized. While certainly societally explosive, there is still a lot going for 3D Printing.
3D Printing means local and custom production. While I do not buy the whole “every household is going to have a machine that prints what they need right now at the press of a button”, I do see vast potential in 3D Printing.
That's why I want to build my future on it.
I want to design things and make them become reality.
A 3D Printer lets me control that process from start to finish. Being able to design a thing in CAD is not enough here; I also need to be able to fully understand and control the machine that makes my thing.
I started using a 3D Printer in early 2018, and by now I have two of them and they mostly do what I tell them to do.
I built both of them from kits and heavily modified them. I control them via OctoPrint, a piece of software that, with its open and helpful community, makes me proud to use it and taught me a lot about open-source principles. 3D Printing in the hobbyist space is a positive example where a method informs my design, and I love all the areas it introduced me to.
Through it, I felt more at home using Linux, programming, soldering, incorporating electronics, and iteratively designing.
I love the abilities a 3D Printer gives me and plan on using it for the [recycling](/plastic-recycling/) project.
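Since OctoPrint exposes a REST API, the printers can even be scripted. A minimal sketch of the idea, not my actual setup: the host `octopi.local` and the API key are placeholders, and the request is only built here, never sent.

```python
import json
import urllib.request

OCTOPRINT_URL = "http://octopi.local"  # placeholder host
API_KEY = "YOUR_API_KEY"               # placeholder key


def job_request(command: str) -> urllib.request.Request:
    """Build a request for OctoPrint's job API (start/pause/cancel)."""
    payload = json.dumps({"command": command}).encode()
    return urllib.request.Request(
        f"{OCTOPRINT_URL}/api/job",
        data=payload,
        headers={
            "X-Api-Key": API_KEY,
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Build (but do not send) a pause command.
req = job_request("pause")
print(req.full_url, req.get_method())
```

Sending the request with `urllib.request.urlopen(req)` would then pause the running job.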
{% include gallery caption="Some projects from my printer." %}
During the last half year, I also worked in a university context with 3D printers.
We conceptualized and established a "Digitallabor", an open space to enable all people to get into contact with innovative technologies. The idea was to create some form of Makerspace while emphasizing digital media.
The project is young; it started in August last year, so most of my tasks were in working groups, deciding on the types of machines and the kinds of content with which such a project can provide value.
Read more about it on the Website:
[DigiLab Osnabrück](https://digitale-lehre.virtuos.uni-osnabrueck.de/uos-digilab/)
Looking forward, I am also incredibly interested in going beyond polymers for printing. I would love to be more experimental with material choices, something rather hard to achieve while staying in a shared student flat. There have been great projects combining ceramics and printing, which I certainly want to have a deeper look into. One project I want to highlight is the evolving cups, which impressed me a lot.
[Evolving Objects](https://evolving-objects.nl)
This group from the Netherlands is algorithmically generating shapes of cups and then printing them on a paste extruder with clay.
The process used is described more here:
The artist [Tom Dijkstra](http://tomdijkstra.info) is developing a paste extruder that can be attached to a conventional printer as a modification, and I would very much love to develop my own version and experiment with printing new and old materials on such a concept printer.
[Printing with Ceramics](https://wikifactory.com/+Ceramic3DPrinting/forum/thread/NDQyNDc0)
[The Paste Extruder](http://tomdijkstra.info/dirtmod/index.php)
Also with regards to the [recycling](/plastic-recycling/) project, it might make sense for me to incorporate multiple machines into one and let the printer itself directly handle pellet- or paste-form. I am looking forward to expanding my horizon there and seeing what is possible.
Cups and tableware are, of course, just one sample area where a return to traditional materials within modern manufacturing could make sense. There is also more and more talk of 3D printed clay or earth homes, an area where [WASP](https://www.3dwasp.com/en/3d-printing-architecture/) is a company I look up to.
They built several concept buildings and structures from locally mixed earth, creating some awesome environmentally conscious structures.
Building locally with locally available materials has several great advantages, especially given the infamous emissions problem within the building industry.
And since such alternative solutions are unlikely to come from the industry itself, one major avenue to explore and pursue these solutions are art projects and public demonstrations.
I want to explore all these areas and look at how manufacturing and sustainability can converge and create lasting solutions for society.
Also, 3D Printing is directly tied to the plans for my master's thesis, since everything I manage to reclaim will somehow have to end up being something again. Why not print away our waste?
Now, after a few years of tinkering, modifying, and upgrading, I find that I have not changed my current setup for over a year. It simply works and I am happy with it. Since my first beginner's printer, failure rates have become negligible, and I have had to print really complex parts just to generate enough waste for the [recycling project](/plastic-recycling/).
Gradually, the mechanical system of the printer shifted from an object of care to simply a tool that I use. In recent years hardware, but especially software, has matured to a point where, at least for me, it tends to be a set-and-forget situation. On to actually making my parts and designs. Read more about that in the post about [CAD](/cad/).

drafts/2018-07-05-cad.md
+++
title: 3D Modeling and CAD
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
header:
teaser: /assets/images/render_bike_holder.png
overlay_image: assets/images/render_bike_holder.png
overlay_filter: 0.2
credit: Aron Petau
excerpt: Modelling and Scanning in 3D using Fusion360, Sketchfab, and Photogrammetry
gallery:
- url: /assets/images/breast_candle.jpg
image_path: /assets/images/breast_candle.jpg
alt: "breast-candle"
title: "A candle made of a 3D scan, found on https://hiddenbeauty.ch/"
- url: /assets/images/vulva_candle.jpg
image_path: /assets/images/vulva_candle.jpg
alt: "vulva_candle"
title: "A candle created with a 3D printed mold made in Fusion360"
tags:
- sketchfab
- fusion360
- functional design
- design for printing
- private
- photogrammetry
- scaniverse
- virtual reality
- 3D printing
- polycam
- parametric modelling
- university of osnabrück
- work
created: 2023-07-26T23:59:12+02:00
last_modified_at: 2023-10-01T20:14:46+02:00
+++
## 3D Modeling and CAD
### Designing 3D Objects
While learning about 3D Printing, I was most intrigued by the possibility of modifying and repairing existing products. While there is an amazing community with lots of good and free models around, I naturally came to a point where I did not find what I was looking for readily designed. I realized this is an essential skill for effectively operating not just 3D Printers, but any productive machine really.
Since YouTube was the place I learned all about 3D Printing, and all the people I looked up to there were using Fusion 360 as their CAD program, that's what I got into.
In hindsight, that was a pretty good choice and I am in love with the abilities parametric design gives me.
Below you will find some of my designs.
The process is something that I enjoy a lot and wish to dive into deeper.
By trial and error, I already learned a lot about designing specifically for 3D Printing, but I often feel that there are many aesthetic considerations in design that I am not familiar with.
I want to broaden my general ability to design physical objects, which is something I hope to gain during my master's.
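Parametric design is easy to sketch even outside a CAD GUI. A toy example, not my Fusion 360 workflow: a Python function that emits OpenSCAD code for an open box whose wall thickness and print tolerance are parameters, so changing one number regenerates the whole part.

```python
def parametric_box(inner_w: float, inner_d: float, inner_h: float,
                   wall: float = 1.6, tolerance: float = 0.2) -> str:
    """Emit OpenSCAD code for an open box around given inner dimensions.

    `tolerance` widens the cavity so a printed insert still fits.
    """
    cav_w = inner_w + tolerance
    cav_d = inner_d + tolerance
    outer = (cav_w + 2 * wall, cav_d + 2 * wall, inner_h + wall)
    return (
        f"difference() {{\n"
        f"  cube([{outer[0]:.2f}, {outer[1]:.2f}, {outer[2]:.2f}]);\n"
        f"  translate([{wall:.2f}, {wall:.2f}, {wall:.2f}])\n"
        f"    cube([{cav_w:.2f}, {cav_d:.2f}, {inner_h + 1:.2f}]);\n"
        f"}}\n"
    )


# A box for a 50 x 30 x 20 mm insert; tweak one argument to regenerate.
print(parametric_box(50, 30, 20))
```

The same change-one-parameter, regenerate-everything loop is what makes parametric CAD so powerful.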
<iframe src="https://myhub.autodesk360.com/ue2cf184b/shares/public/SH9285eQTcf875d3c539feb2bfae6da3d872?mode=embed" width="100%" height="600" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true" frameborder="0"></iframe>
<iframe src="https://myhub.autodesk360.com/ue2cf184b/shares/public/SH9285eQTcf875d3c53974bf27fea6ee1a20?mode=embed" width="100%" height="600" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true" frameborder="0"></iframe>
<iframe src="https://myhub.autodesk360.com/ue2cf184b/shares/public/SH9285eQTcf875d3c539ed795f9645d8b981?mode=embed" width="100%" height="600" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true" frameborder="0"></iframe>
<iframe src="https://myhub.autodesk360.com/ue2cf184b/shares/public/SH9285eQTcf875d3c539bc7225ced67e5e92?mode=embed" width="100%" height="600" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true" frameborder="0"></iframe>
<iframe src="https://myhub.autodesk360.com/ue2cf184b/shares/public/SH9285eQTcf875d3c5397f64c69f2093b1b5?mode=embed" width="100%" height="600" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true" frameborder="0"></iframe>
<iframe src="https://myhub.autodesk360.com/ue2cf184b/shares/public/SH9285eQTcf875d3c539e8166aea2f430aed?mode=embed" width="100%" height="600" allowfullscreen="true" webkitallowfullscreen="true" mozallowfullscreen="true" frameborder="0"></iframe>
{% include gallery caption="Here are some of my models in the real world" %}
Check out more of my finished designs in the Prusaprinters (now Printables) Community
[My Printables profile](https://www.printables.com/social/97957-arontaupe/models){: .btn .btn--large}
## 3D Scanning and Photogrammetry
Besides coming up with new objects, incorporating the real world is also an interest of mine.
### Interaction with real objects and environments
In the last few years I have played around with a few smartphone cameras and was always quite sad that my scans were never accurate enough to do cool stuff with them. I could not really afford a real 3D scanner and had already started cobbling together a Raspberry Pi camera with a cheap ToF sensor, a simple but not quite as good replacement for a laser or lidar sensor, but then Apple came out with the first phones with an accessible lidar sensor.
Recently, through work at the university, I got access to a device with a lidar sensor and started having fun with it.
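Under the hood, both the cheap ToF sensor and the lidar deliver a depth map, and turning that into a point cloud is a plain pinhole-camera back-projection. A minimal sketch with invented camera intrinsics (fx, fy, cx, cy are placeholders, not values from any real device):

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (row-major list of lists) to 3D points.

    For pixel (u, v) with depth z: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
    Pixels with zero depth (no sensor return) are skipped.
    """
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points


# A tiny 2x2 "depth map": one missing return, three valid samples.
cloud = depth_to_points([[0.0, 2.0], [1.0, 1.0]], fx=100, fy=100, cx=1, cy=1)
print(len(cloud))
```

Photogrammetry apps do far more (feature matching, bundle adjustment, meshing), but this back-projection is the geometric core.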
See some examples here:
<div class="sketchfab-embed-wrapper"> <iframe title="DigiLab Main Room" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/c880892c6b4746bc80717be1f81bf169/embed?ui_theme=dark&dnt=1"> </iframe> </div>
<div class="sketchfab-embed-wrapper"> <iframe title="VR Room DigiLab" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/144b63002d004fb8ab478316e573da2e/embed?ui_theme=dark&dnt=1"> </iframe> </div>
This last one was scanned with just my smartphone camera. You can see that the quality is notably worse, but considering it was created with just a single, run-of-the-mill smartphone sensor, I think it is still pretty impressive, and it will certainly do something towards democratizing such technologies and abilities.
<div class="sketchfab-embed-wrapper"> <iframe title="Digitallabor UOS" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/2f5cff5b08d243f2b2ceb94d788b9cd6/embed?ui_theme=dark&dnt=1"> </iframe> </div>
## Perspective
What this section is supposed to deliver is the message that I am currently not where I want to be in navigating the vast possibilities of CAD. I feel confident enough to approach small repairs around the flat with a new perspective, but I still lack technical expertise when approaching a collection of composite parts that have to function together. I still have lots of projects half-done or half-thought, and one major reason is that there is no real critical exchange within my field of study.
I want more than designing figurines or wearables.
I want to incorporate 3D printing as a method to extend the abilities of other tools, to serve mechanical and electrical purposes, to be food-safe and engaging.
I fell in love with the idea of designing a toy system, inspired by [Makeways on Kickstarter](https://www.kickstarter.com/projects/makeway/makeway-create-intricate-courses-watch-your-marbles-soar), I have already started adding my own parts to their set.
I dream of my very own 3D printed coffee cup, one that is both food-safe and dishwasher-proof. For that, I would have to do quite a bit of material research, but that just makes the idea so much more appealing.
I would love to find a material composition incorporating waste to stop relying on plastics, or at least on fossil plastics.
Once in Berlin, I would want to talk to the people at [Kaffeeform](https://www.kaffeeform.com/en/), who produce largely compostable coffee cups incorporating a significant amount of old espresso grounds, albeit using injection molding for their process.
The industry selling composite filaments is much more conservative with the percentage of non-plastic additives, because with a nozzle extrusion process there is much more to go wrong.
Still, I would love to explore that avenue further and think there is a lot to be gained from looking at pellet printers.
I also credit huge parts of my exploration process into local recycling to the awesome people at [Precious Plastic](https://preciousplastic.com), who I will join over the summer to learn more about their system.
I find it hard to write anything about CAD without connecting it directly to a manufacturing process.
And I believe that's a good thing. Always tying a design process to its realization grounds the process and attaches some immediacy to it.
For me to become more confident in this process, I am still missing expertise in organic shapes, so I would be happy to dig more into Blender, an awesome tool that in my mind is far too powerful to dive into with just YouTube lessons.
## Software that I have used and like
[AliceVision Meshroom](https://alicevision.org/#meshroom){: .btn .btn--large}
[Scaniverse](https://scaniverse.com/){: .btn .btn--large}
[My Sketchfab Profile](https://sketchfab.com/arontaupe){: .btn .btn--large}
[3D Live Scanner for Android](https://play.google.com/store/apps/details?id=com.lvonasek.arcore3dscanner&hl=en&gl=US){: .btn .btn--large}

drafts/2018-09-01-beacon.md
+++
title: BEACON
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: Decentralizing the Energy Grid in inaccessible and remote regions
header:
teaser: /assets/images/india_key_monastery.jpg
overlay_image: assets/images/india_key_monastery.jpg
overlay_filter: 0.2
credit: Aron Petau
tags:
- decentral
- democratic
- python
- engineering
- data viz
- electricity
- solar
- data collection
- simulation
- grid
- scaling
- himalaya
- india
- research
- energy
- university of osnabrück
- iit kharagpur
created: 2023-07-26T23:59:30+02:00
last_modified_at: 2023-10-01T20:14:56+02:00
+++
## BEACON: Decentralizing the Energy Grid in inaccessible and remote regions
Access to electricity is a basic human right. At first, that may seem over the top, but if one stops to think what all the little tasks that electricity indirectly handles for us (lighting, laundry, cooking, freezing, heating, entertaining…) would consume in time and effort if we had to perform them manually, this idea becomes very clear. Globally, around 1 billion people are without tier 2 access to electricity.
[SDGS Goal 7](https://sdgs.un.org/goals/goal7)
![The electricity tiers defined by the UN](/assets/images/electricity_tiers.png)
One only realizes the intensity of the labor that goes into everything when there is no electricity. And it is not even only about convenience: electricity is an enormous lifesaver in any number of scenarios; think just of hospitals or mobile phone networks that would be rendered completely useless without it. So we can easily agree on a need, a demand for electricity globally, for every person. But what about the supply? Why are 1 billion people undersupplied?
The answer: missing profitability. Supplying every last person on earth would be a charity project, not a profitable one. And while charitable projects are noble and should be pursued, the reality within capitalism shows that this is not the way it is going to happen.
But what if we could come up with technology, or rather, a communal structure, that enables us to supply profitably, and still adapt to both, the difficult external factors (weather issues, remoteness, altitude, etc.) and the smaller purses of the undersupplied?
### Location
Towards the end of 2018, I spent 4 months in northern India, on a research project with the IIT Kharagpur.
The goal was to work on one of the 17 UN-defined Sustainable Development Goals: electricity.
Worldwide, an estimated 1 billion people have no or insubstantial access to the grid.
Some of them live here, in the Key Monastery in the Spiti Valley at around 3500 meters altitude.
![key monastery](/assets/images/india_key_monastery.jpg)
<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d843.1304298825468!2d78.01154047393467!3d32.2978346!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x3906a673e168749b%3A0xf011101a0f02588b!2sKey%20Gompa%20(Key%20Monastery)!5e0!3m2!1sen!2sde!4v1647009764190!5m2!1sen!2sde" width="500" height="500" style="border:0;" allowfullscreen="true" loading="lazy"></iframe>
![tashi gang](/assets/images/tashi_gang.jpg)
This is Tashi Gang, a village close to the Monastery. It houses around 50 people and only has road access during 3-4 months in the summer. For the rest of the time, the people rely on first aid services by helicopter, which can only be called with a working cell phone tower.
<iframe src="https://www.google.com/maps/embed?pb=!1m18!1m12!1m3!1d3389.4081271053687!2d78.67430271521093!3d31.841107638419718!2m3!1f0!2f0!3f0!3m2!1i1024!2i768!4f13.1!3m3!1m2!1s0x3907aaa3ac472219%3A0x5c4b39e454beed3c!2sTashigang%20172112!5e0!3m2!1sen!2sde!4v1647009910307!5m2!1sen!2sde" width="500" height="500" style="border:0;" allowfullscreen="true" loading="lazy"></iframe>
## The Project
In an environment reliant on hydro and solar energy (diesel transport is unreliable due to snowed-in mountain roads), with over 6 months of snowy winter, frequent snowstorms, and temperatures down to -35°C, securing the grid is hard.
Our way of tackling the issue was to reject the notion, well established in Western society, of electricity as a homogeneous product with centralized production, and instead to research the possibilities of a predictive, self-correcting, and decentral grid.
By prioritizing energy usage cases, instead of a full blackout during a storm, essential functions like radio towers and hospitals could be partially powered and perhaps stay functioning. The binarity of either having electricity or not would be replaced by assigned quantities and timeslots, in a collective effort to be mindful and distribute electricity based on necessity.
The ultimate vision was a live predictive electricity market, where people could even earn money by selling their allotted, but not needed electricity.
To gauge feasibility, I conducted several psychological acceptance studies and collected data on local electricity demands.
I simulated a typical day of electricity demand in the Key monastery and the surrounding villages and mapped out the potential to install cost-efficient smart microgrid controllers enabling such an accurate and predictive behavior.
The smart grid operator boxes available here in Germany cost several hundred euros, with installation several thousand, not a feasible solution for the Indian population. Instead, we wanted to use Raspberry Pis, interconnected through ethernet cables or local mesh networking.
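The core of such a controller fits in a few lines: rank the loads, then serve them in order until the supply runs out. A sketch of the idea with invented loads and numbers, not the code we deployed:

```python
def allocate(supply_w, loads):
    """Serve loads in priority order (lower number = more essential).

    Returns {name: granted watts}; a load is either fully served or cut,
    mirroring the idea of assigned quantities instead of a full blackout.
    """
    granted = {}
    remaining = supply_w
    for name, demand, priority in sorted(loads, key=lambda l: l[2]):
        if demand <= remaining:
            granted[name] = demand
            remaining -= demand
        else:
            granted[name] = 0
    return granted


# Invented example: a storm cuts supply to 5 kW.
loads = [
    ("hospital", 3000, 0),
    ("radio tower", 800, 0),
    ("street lighting", 1500, 1),
    ("household heating", 6000, 2),
]
print(allocate(5000, loads))
```

The essential services stay up while the deferrable loads are shed, which is exactly the partial-supply behavior described above.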
## Research
![The Electricity layout of the Key Monastery](/assets/images/Key_Monastery_Spiti.png)
## Data Collection
Building a questionnaire and visiting public schools during their English classes, I had the chance to speak to a range of teenagers, who answered questions about the state of electricity in their homes, generating more data than I could have gathered running from door to door without any skills in the local dialects. The questionnaire was as scientific as I could make it in such a situation and geared towards finding the type and number of electric devices in the homes and estimating typical usage scenarios.
With a total of 145 participants from more than 6 different schools and roughly 4 different districts, all located in the Indian part of the Himalayas, the findings are as follows:
The participants range from 11 to 53 years, with an average of 17 years.
The average household has 6 members and an average of 5 smart devices. Only 2 percent of the households had not a single smart device, but at the same time only 42 percent had direct or indirect access to a laptop or computer. So the main body of smart devices consists of smartphones, with a negligible portion of tablets.
The average house contains around 11 electrical appliances in total.
**Subjective** Quality Rating on a scale of 1 to 10:
> Average quality in summer: 7.1
> Average quality in monsoon: 5.6
> Average quality in autumn: 7.1
> Average quality in winter: 4.0
So, as you would expect, during winter, but also when it rains, the perceived quality drops by more than 30 percent on average.
As for the daily supply time, the average sits at 15.1 hours out of 24, meaning people have electricity only 62.9 percent of the time; some, such as the people in Diskit, have a sad 4 hours of daily access. On top of that, this estimate does not account for the snowfalls in Spiti, for example, where it is not uncommon to experience 3 or more consecutive days of power cut.
As power meters are supplied by the government, a solid 82 percent of the houses have a working one. If one assumes that the 13 percent who did not know whether they have a power meter do have one, we can say that around 95 percent of the houses are metered.
Another goal of the studies was to find out what would incline people to be caring and sharing with the available electricity, something rather unimaginable here in Germany.
In general, the uninformed openness to delaying electricity usage on a scale of 1-10 was around 5.5; with the additional information that a smart delay would cause an overall price reduction, acceptance went up to 6.9, a good 14 percentage points. This implies that people would be a lot more inclined to give up conveniences if the benefits have a direct impact on them.
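The quoted figures are easy to verify; a quick sketch of the arithmetic:

```python
# Openness to delaying electricity use, on a 1-10 scale (survey averages).
uninformed = 5.5
with_price_incentive = 6.9

# On a 10-point scale, the gain maps directly to percentage points of the scale.
gain_pct = (with_price_incentive - uninformed) / 10 * 100
print(f"acceptance gain: {gain_pct:.0f}% of the scale")

# Daily supply time: 15.1 of 24 hours.
supply_share = 15.1 / 24 * 100
print(f"supply share: {supply_share:.1f}%")
```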
## Simulation
After collecting all the estimated electric appliances of the local population, I simulated the use of 200 solar panels with 300 Wp each, once for simultaneous electricity use, and once with electricity peaks mitigated through smart optimization and usage delay.
![SAM Simulation of a local solar system ](/assets/images/sam_sim.png)
![SAM Simulation Optimized](/assets/images/sam_sim_opt.png)
Although solar is definitely not the optimal choice here and generates lots of issues with energy storage and battery charging at sub-zero temperatures, we figured that this was the way to go for the project.
And as you can see, optimizing peak usage can improve solar from covering only one-fifth of the winter demand to about half of it. Keep in mind that the added solar farm was only intended to supply additional energy, not replace existing solutions; such a "small" farm would be a real lifesaver there and make the most of the limited space in extremely mountainous terrain.
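The effect of delaying loads can be illustrated with a toy model, far coarser than the SAM simulation and with invented numbers: deferrable demand is moved under the midday solar peak, and coverage is the fraction of demand met directly.

```python
def coverage(solar, demand):
    """Fraction of hourly demand met directly by solar (no storage)."""
    met = sum(min(s, d) for s, d in zip(solar, demand))
    return met / sum(demand)


# Four coarse "hours": night, morning, midday, evening (kW, invented).
solar = [0, 2, 8, 1]
rigid = [3, 4, 3, 6]      # everyone draws power whenever they like
shifted = [2, 4, 7, 3]    # same total demand, deferrable loads moved to midday

print(f"rigid:   {coverage(solar, rigid):.0%}")
print(f"shifted: {coverage(solar, shifted):.0%}")
```

Total demand is identical in both cases; only its timing changes, yet direct solar coverage improves substantially.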
## Closing words
There are two sides from which the problem can be tackled: we can bring the total energy production up, by adding more panels or generating electricity by other means, but we can also try to bring the total demand down. This can be achieved by investing strictly in the most energy-efficient appliances; even replacing older, not-so-efficient appliances might sometimes be worthwhile.
But ensuring efficient use is not the only way to bring down the overall demand.
As introduced as core ideas for the whole project, sharing and delaying will prove immensely useful. How so?
By sharing, we mean a concept that is already widely applied in the relevant areas. What do we do in a village that has no access to water? Do we send each household out to the faraway river to fetch water for their family? Or do we join hands in a community effort to dig a central well used by everyone?
So, when we look at sharing electricity, how would we apply the concept? We take the appliances that individually consume the most energy and scale them up in order to increase efficiency. In our case, that is most applicable to electric heating. If we manage to heat central community spaces available to everyone, naturally fewer individual rooms will have to be heated. Similarly, one could declare a room a public cinema, where people come together and watch TV on a big projector. Twice as fun, and again conserving a great deal of energy. Such ideas and others have to be realized in order to match the total demand with the available supply.
Sadly, the project was never taken up further, and the situation for the people in the Spiti Valley has not improved. Two years ago, a road directly through the mountains was finished, making the population hopeful for an increase in tourism, increasing the chances of the economic viability of improved solutions.
I spent my time there as a research intern, having no real say in the realization of the project. The problem remains, and I still think that decentral solutions look the most promising for this specific location. Of course, the Himalayas present a bit of an extreme location, but that doesn't change the fact that people live there and have a basic human right to electricity.

+++
title: Plastic Recycling
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: A recycling system inspired by Precious Plastic, Filastruder, and Machine Learning
header:
teaser: /assets/images/recycling_graphic.jpg
overlay_image: assets/images/recycling_graphic.jpg
credit: Aron Petau
tags:
- decentral
- democratic
- precious plastic
- plastics
- recycling
- circular
- cradle-to-cradle
- automatic
- ml
- Arduino
- Linux
- filastruder
- private
- master thesis
- waste
- sustainability
- environment
- 3D printing
- filament
- shredder
created: 2023-07-26T23:59:37+02:00
last_modified_at: 2023-10-01T20:15:10+02:00
+++
Being involved with 3D Printers, there is the issue of sustainability that I am confronted with regularly.
Most 3D printed parts never get recycled and add to the global waste problem, rather than reducing it.
The printer most certainly doesn't care what it is printing; the main problems are dimensional accuracy and the purity of the material. All of this leads to a huge industry, with Germany especially involved, using loads of virgin plastic.
What can be done about it?
We can design our products to last longer; we can also print recycling labels on them so they do not have to be burned after their first life. We can take care to print only functional objects, not just fun toys nobody uses.
Yet none of that prevents the use of virgin plastics. If you buy a spool of filament, there are some recycled options, but usually at twice the price and worse quality. No wonder recycled filament fails to convince the masses. It is mostly a fun thing YouTubers can pursue, not a viable commercial process.
{% include video id="vqWwUx8l_Io" provider="youtube" %}
In my opinion, the core problem is the nonexistent economic feasibility of a proper recycling process. Identifying the exact material of a piece of trash is a very hard problem, definitely not solved yet. So why do we mix the plastic up in the first place? There is a general willingness among people to recycle, but the system for it is missing.
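Starting from clean, known sources sidesteps identification entirely, but even mixed flakes can be roughly sorted by physical properties. A toy sketch using density alone, with approximate textbook densities, not a calibrated process:

```python
# Approximate densities in g/cm^3 for common printing/packaging plastics.
DENSITIES = {
    "PP": 0.905,
    "HDPE": 0.95,
    "PLA": 1.24,
    "PET": 1.38,
}


def guess_plastic(density):
    """Return the plastic type whose reference density is closest."""
    return min(DENSITIES, key=lambda p: abs(DENSITIES[p] - density))


print(guess_plastic(0.96))  # a bottle cap, for example
print(guess_plastic(1.35))
```

Real sorting plants use near-infrared spectroscopy and float-sink tanks for the same reason: physical properties are the only handle on anonymous flakes.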
# The Master Plan
I want to get people to wash and separate their trash for me, since those are the most expensive steps in the recycling process. There is a willingness to take the extra step, and even if just my mom collects bottle caps for me, that is more than I can realistically use up.
This only really works when I think in a local, decentralized environment.
The existing recycling facilities clearly will not be able to provide 200 different containers for 200 different types of plastic.
Starting the process with clean and sorted materials, like bottle caps (HDPE) or failed prints (PET-G), I start off with an advantage.
Now I have to break the trash down into evenly sized particles.
Meet:
## The Shredder
We built the Precious Plastic Shredder!
<iframe width="510" height="682" src="https://b2b.partcommunity.com/community/partcloud/embedded.html?route=embedded&name=Shredder+Basic+V2.0&model_id=96649&portal=b2b&showDescription=true&showLicense=false&showDownloadButton=false&showHotspots=true&noAutoload=false&autoRotate=true&hideMenu=false&topColor=%23dde7ed&bottomColor=%23ffffff&cameraParams=false&varsettransfer=" frameborder="0" id="EmbeddedView-Iframe-96649" allowfullscreen></iframe>
With these awesome open-source drawings, I was able to cobble together my very own, very dangerous plastic shredder.
After finding some way to drive this massive shaft, I feed the beast and hopefully get tiny, fairly uniform plastic bits that are ready to begin the cycle of life anew.
The solution for the motorization was an old garden shredder that still had an intact motor and wiring.
We cut it in half and attached it to the shredder box.
{% include video id="QwVp1zmAA4Q" provider="youtube" %}
After replacing the weak force-transmission screw with an industrial coupler, we were ready to try it out. Obviously, there are still safety concerns with this prototype; a proper hopper is already being made.
Nevertheless, we are confident that this shredder will be able to deal with the lighter sorts of plastic we have in mind.
As you can see, I am now able to produce awesome confetti, but to do more with the plastic flakes, I have to extrude them into filament.
## Meet the Filastruder
This is the Filastruder, designed and made by Tim Elmore as an attempt to create the cheapest viable way to extrude plastic. The biggest cost driver is the tight industrial tolerance in thickness that has to be maintained; this is, in essence, what separates good filament from bad. The industry standard nowadays is ±0.03 mm. Hard to achieve on a DIY setup, but not unheard of. Like any bigger industrial equivalent, the setup consists of a motor pressing plastic pellets through a heated screw, extruding molten plastic through a nozzle at the end, which sets the diameter. The leftmost machine is responsible for winding the filament properly onto a spool.
Here you can see the extrusion process in action.
{% include video id="FX6--pYrPVs" provider="youtube" %}
The Filastruder is controlled by an Arduino and is highly configurable. The laser sensor visible in the video is already working, but I am missing more direct control over the diameter of the filament.
Since it all really comes down to the single variable of filament diameter determining the quality of my recycled filament, a simple machine-learning optimization suggests itself: I have a few control variables, such as winder speed, extrusion speed, heat, and cooling intensity, that can be tuned on the fly toward an exact diameter. This is roughly how virgin filament is produced; commercial facilities just manage it much faster.
![The variables in an iterative optimization](/assets/images/recycling_variables.png)
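As a minimal sketch of this idea, here is a simple proportional feedback loop in Python. All numbers are invented for illustration; the "extruder model" is a stand-in for real readings from a diameter sensor, not the physics of an actual machine.

```python
# Toy sketch: nudge winder speed toward a 1.75 mm target diameter.
# The simulated_diameter() function stands in for a real laser sensor.

TARGET_MM = 1.75
GAIN = 0.4  # proportional gain (assumed; would need tuning on real hardware)

def simulated_diameter(winder_speed: float) -> float:
    """Toy physics: faster winding stretches the strand thinner."""
    return 2.2 / (1.0 + 0.3 * winder_speed)

def control_step(winder_speed: float) -> float:
    error = simulated_diameter(winder_speed) - TARGET_MM
    # positive error -> filament too thick -> wind faster
    return winder_speed + GAIN * error

speed = 0.5
for _ in range(50):
    speed = control_step(speed)

print(round(simulated_diameter(speed), 3))  # converges to 1.75
```

A real controller would also have to handle the long dead time between the extrusion screw and the sensor, which is part of why this is harder than the sketch suggests.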
So far, I am aware of a few companies and academic projects attempting this, but none of them match either the quality or the price of conventional products. Automation does not just take jobs away; I think it can also be a helpful tool, for example in tackling environmental issues such as this one.
This project is very dear to my heart and I plan to investigate it further in the form of a master thesis.
The realization will require many skills I am already picking up or still need to work on within the Design and Computation program.
{: .notice--info}
[Reflow Filament](https://reflowfilament.com/){: .btn .btn--large}
[Perpetual Plastic Project](https://www.perpetualplasticproject.com/){: .btn .btn--large}
[Precious Plastic Community](https://preciousplastic.com/){: .btn .btn--large}
[Filamentive Statement on why recycling is not feasible in their opinion](https://www.filamentive.com/recycling-failed-and-waste-3d-prints-into-filament-challenges/){: .btn .btn--large}
[Open source filament diameter sensor by Tomas Sanladerer](https://www.youmagine.com/designs/infidel-inline-filament-diameter-estimator-lowcost-10-24){: .btn .btn--large}
[Re-Pet Shop](https://re-pet3d.com/s){: .btn .btn--large}
+++
title: Ballpark
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: A 3D Game Concept in Unity
header:
teaser: /assets/images/ballpark_menu.png
overlay_image: assets/images/ballpark_menu.png
overlay_filter: 0.5
credit: Aron Petau
tags:
- 3D graphics
- Unity
- C#
- cyberpunk
- collaborative
- game
- physics
- communication
- 1st person
- 3rd person
- 2 player
- university of osnabrück
created: 2023-07-26T23:59:42+02:00
last_modified_at: 2023-10-01T20:15:16+02:00
+++
## Ballpark: 3D Environments in Unity
Implemented in Unity, Ballpark is a concept work for a collaborative two-player game in which one player is a navigator with a third-person perspective and the other is a copilot responsible for interacting with the environment. It features mostly working physics, intelligent enemies, a gun, a grappling-hook system for traversing the map, a 2D navigation interface, and a health-bar system, all on top of the meanest cyberpunk vibes my past self was able to conjure.
Enjoy!
{% include video id="jwQWd9NPEIs" provider="youtube" %}
As you can see, the design reflects some questionable choices, but all mechanics are homemade from the ground up and I learned a lot. I often struggle to enjoy competitive games and think there is potential in a co-dependent game interface. During early testing, we often found that it enforces player communication, since even the tutorial is quite hard to beat.
Being a leftie, I made the perhaps not entirely smart choice of giving player one the keyboard arrows and player two the WASD keys plus the left and right mouse buttons for grappling and shooting. For the game, this has an interesting side effect: players are forced not only to interact through the differing information on each player's screen, but also to physically coordinate the controls.
As you can perhaps see, the ball-rolling navigation is quite hard to use.
It is a purely physics-based system, where the materiality of the ball determines its weight and therefore its inertia, which drastically changes the handling.
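The core effect can be illustrated in isolation. This is a toy sketch in Python rather than the game's actual C# code, with made-up numbers:

```python
# Why ball weight changes handling: same push, different acceleration.
# Toy velocity integration, ignoring friction and rolling resistance.

def roll(mass_kg: float, force_n: float = 10.0,
         steps: int = 60, dt: float = 1 / 60) -> float:
    """Velocity after one second of constant pushing."""
    velocity = 0.0
    for _ in range(steps):
        velocity += (force_n / mass_kg) * dt  # a = F / m
    return round(velocity, 2)

print(roll(mass_kg=1.0), roll(mass_kg=5.0))  # heavy ball reaches 1/5 the speed
```

In the game this is compounded by the ball's moment of inertia, so a heavy ball is not just slower to accelerate but also much harder to stop.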
On small screens, the prototype version of the game is virtually impossible to control, and several visual bugs within the viewport still obfuscate items when they are too close. Considering that virtually all the mechanics are written from scratch, with a follow-me camera, collision detection, smart moving agents, and a still very wonky-looking grappling gun, I think it deserves a spot in this portfolio.
For this project I focused completely on the mechanics of the game, resulting in heavy use of prefabs and ready-made 3D objects. Next time, I want to make those myself too.
I enjoyed my stint into Unity a lot and am looking forward to creating my first VR application. I would love to try out a mechanic where the player's vision is completely taken over by VR and they have to carry their eyes as a handheld connected camera, so that players can move the camera itself around with their hands.
+++
title: Coding Examples
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: A selection of coding projects from my Bachelor's in Cognitive Science
header:
teaser: /assets/images/sample_lr.png
overlay_image : assets/images/sample_lr.png
overlay_filter : 0.2
credit : Aron Petau
gallery:
- url: /assets/images/sample_lr.png
image_path: /assets/images/sample_lr.png
title: "A low-resolution sample"
- url: /assets/images/sample_hr.png
image_path: /assets/images/sample_hr.png
alt: ""
title: "A high-resolution sample. This is also called 'ground truth' "
- url: /assets/images/sample_sr.png
image_path: /assets/images/sample_sr.png
alt: " "
title: "The artificially enlarged image patch resulting from the algorithm"
- url: /assets/images/sample_loss.png
image_path: /assets/images/sample_loss.png
alt: ""
title: "A graph showing an exemplary loss function applied during training"
- url: /assets/images/sample_cos_sim.png
image_path: /assets/images/sample_cos_sim.png
alt: ""
title: "One qualitative measurement we used was pixel-wise cosine similarity. It is used to measure how similar the output and the ground truth images are"
tags:
- ethics
- computer vision
- neural nets
- face detection
- object recognition
- GOFAI
- super resolution
- jupyter notebook
- google colab
- python
- tensorflow
- keras
- machine learning
- AI
- MTCNN
- CNN
- university of osnabrück
created: 2023-07-26T23:59:59+02:00
last_modified_at: 2023-10-01T20:15:26+02:00
+++
## Neural Networks and Computer Vision
## A selection of coding projects
Although pure coding and debugging are often not a passion of mine, I recognize the importance of neural networks and other recent developments in computer vision. From the several projects on AI and machine learning that I co-authored during my Bachelor's program, I picked this one because I think it is well documented and explains step by step what we do.
### Image Super-Resolution using Convolutional Neural Networks (Recreation of a 2016 Paper)
Image super-resolution is a hugely important topic in computer vision. If it becomes sufficiently advanced, we could take all our screenshots, selfies, and cat pictures from the 2006 Facebook era and even earlier and scale them up to suit modern 4K needs.
Just to give an example of what is possible in 2020, only 4 years after the paper discussed here, have a look at this upscaled video from 1902:
{% include video id="EQs5VxNPhzk" provider="youtube" %}
The 2016 paper we had a look at is much more modest: it upscales only a single image. Historically, though, it was one of the first to achieve computing times small enough to enable the kind of real-time video upscaling visible in the video above (from 2020), or the kind Nvidia nowadays uses to upscale video games.
{% include gallery caption="Example of a Super-Resolution Image. The Neural network is artificially adding Pixels so that we can finally put our measly selfie on a billboard poster and not be appalled by our deformed-and-pixelated-through-technology face." %}
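The pixel-wise cosine similarity mentioned in the image captions can be sketched in a few lines of NumPy. This is a generic illustration of the metric, not our exact evaluation code:

```python
import numpy as np

def cosine_similarity_map(pred: np.ndarray, truth: np.ndarray) -> np.ndarray:
    """Pixel-wise cosine similarity between two (H, W, C) images.

    Each pixel's channel vector is compared; 1.0 means the colour
    direction matches exactly, regardless of brightness.
    """
    dot = np.sum(pred * truth, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(truth, axis=-1)
    return dot / np.maximum(norm, 1e-8)  # guard against division by zero

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
# an image compared with itself gives 1.0 everywhere
print(cosine_similarity_map(img, img).round(3))
```

Because the metric ignores vector magnitude, a uniformly brightened output still scores perfectly, which is why it is usually reported alongside pixel-wise losses rather than instead of them.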
[The Python notebook for Image super-resolution in Colab]( https://colab.research.google.com/drive/1RlgIKJmX8Omz9CTktX7cdIV_BwarUFpv?usp=sharing){: .btn .btn--large}
### MTCNN (Application and Comparison of a 2016 Paper)
Here, you can also have a look at another, much smaller project, where we rebuilt a rather classical machine-learning approach to face detection. We use preexisting libraries to demonstrate the difference in efficacy between approaches, showing that Multi-task Cascaded Convolutional Networks (MTCNN) were among the best-performing approaches in 2016. Since I invested much more love and work into the project above, I would prefer you check that one out, in case two projects are too much.
[Face detection using a classical AI Approach (Recreation of a 2016 Paper)](https://colab.research.google.com/drive/1uNGsVZ0Q42JRNa3BuI4W-JNJHaXD26bu?usp=sharing){: .btn .btn--large}
+++
title: Homebrew
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: A bubbly hobby of mine
header:
teaser: /assets/images/beer_tap.jpg
overlay_image : assets/images/beer_tap.jpg
overlay_filter : 0.2
credit : Aron Petau
tags:
- experiment
- beer
- homebrew
- private
- lager
- altbier
- hops
- keg
- fermentation
- pressure
- yeast
- sustainability
gallery:
- url: /assets/images/beer_setup.jpg
image_path: /assets/images/beer_setup.jpg
title: "The latest iteration of my homebrew setup, using pressure tanks and a pressurized fermentation chamber"
- url: /assets/images/beer_setup_2.jpg
image_path: /assets/images/beer_setup_2.jpg
title: "An electric kettle I use for the Brew"
- url: /assets/images/beer_tap.jpg
image_path: /assets/images/beer_tap.jpg
title: "I made my own kegging system featuring a tap from an old table leg."
- url: /assets/images/beer_fermentation.jpg
image_path: /assets/images/beer_fermentation.jpg
title: "An active fermentation"
- url: /assets/images/hops.jpg
image_path: /assets/images/hops.jpg
title: "Hops growing in our garden, so I can experiment with fresh specialty hops"
- url: /assets/images/beer_malt.jpg
image_path: /assets/images/beer_malt.jpg
title: "The leftover mass of spent grain.
Animals love it, it's great for composting,
but most importantly, its great for baking bread!"
created: 2023-07-27T00:00:07+02:00
last_modified_at: 2023-10-01T20:15:40+02:00
+++
## Brewing
### Making my own beer
I love hosting, and I love experimenting in the kitchen. Starting with homebrews was a natural fit for me, and during the first wave of Covid-19 I went the whole homebrewer's route of bottle fermentation and small batches, later elevating my game with larger 50-liter batches and a pressure-tank system.
Starting out, I found it fascinating how just four rather simple ingredients (malt, hops, water, and yeast) can form such an incredible range of taste experiences. It was, and still is, a tremendous learning experience in which one slowly has to accept not being able to fully control the process and find room for creativity.
Why do I present such an unrelated, non-academic hobby here? I simply do not regard it as unrelated: experimenting with and optimizing a process and a workflow, creating optimal conditions for the yeast to do its job, feels very similar to approaching a coding project.
Yeast and what it does fascinates me. Every time I open the latch to release some pressure on the tank, I think of the awesome symbiotic relationship yeast has with humans and how many different strains live there together to create a unique, yet tailored flavor. Several ideas are floating around for changing the brewing process by capturing the created carbon dioxide and using it productively. I could see a car tire being filled with my beer gas, or an algae farm munching away on my CO2 byproducts. Within a closed-loop pressurized system, such ideas actually become realizable, and I would love to explore them further.
I am not yet an expert on algae, but I can manage with yeast and I believe they can coexist and create a more sustainable cycle of production.
Young Henrys, a brewery in Australia, is already incorporating algae into its industrial process:
[The Algae project](https://younghenrys.com/algae){: .btn .btn--large}
Such ideas do not enter the industry by themselves: I believe that art and the exploratory discovery of novel techniques are the same thing. Good and inventive design can improve society and take steps toward sustainability. I want to be part of that and would love to find new ways of using yeast in other design contexts: see whether I can make it work in a closed circular system, make it calculate things for me, or simply make my next beer taste awesome with just the right amount of fizz.
{% include gallery caption="Some selected photos of the process in our Kitchen" %}
+++
title: Chatbot
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: A speech-controlled meditation assistant and sentiment tracker
header:
overlay_image: "https://cloud.google.com/dialogflow/es/docs/images/fulfillment-flow.svg"
teaser: "https://cloud.google.com/dialogflow/es/docs/images/fulfillment-flow.svg"
credit: "Google Dialogflow Documentation"
tags:
- nlp
- nlu
- google assistant
- python
- speech interface
- data viz
- work
- sql
- voice assistant
- meditation
- chatbot
- google dialogflow
- google cloud
- university of osnabrück
created: 2023-07-27T00:00:19+02:00
last_modified_at: 2023-10-01T20:15:51+02:00
+++
## Guru to Go: a speech-controlled meditation assistant and sentiment tracker
{% include video id="R73vAH37TC0" provider="youtube" %}
Here, you see a demo video of a voice-controlled meditation assistant that we worked on in the course "Conversational Agents and Speech Interfaces".
[Course Description](https://w3o.ikw.uni-osnabrueck.de/scheinmaker/export/details/76/
){: .btn .btn--large}
The central goal of the entire project was to make the assistant entirely speech-controlled, so that the phone need not be touched while you immerse yourself in meditation.
The chatbot was built in Google Dialogflow, a natural-language-understanding engine that can interpret free text input and identify entities and intents within it.
We wrote a custom Python backend to use these evaluated intents and compute individualized responses.
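The essence of that request/response flow can be sketched in a handler like the one below. The intent and parameter names are invented for illustration; our real backend differed in the details.

```python
# Minimal sketch of a Dialogflow ES fulfillment handler: the webhook
# receives the detected intent plus parameters and returns a reply.
# Intent names ("start.meditation", "log.sentiment") are hypothetical.

def handle_webhook(request: dict) -> dict:
    query = request.get("queryResult", {})
    intent = query.get("intent", {}).get("displayName", "")
    params = query.get("parameters", {})

    if intent == "start.meditation":
        minutes = int(params.get("duration", 5))
        text = f"Starting a {minutes}-minute meditation. Close your eyes."
    elif intent == "log.sentiment":
        text = f"Noted that you feel {params.get('mood', 'okay')} today."
    else:
        text = "Sorry, I didn't catch that."

    # Dialogflow ES expects the reply under "fulfillmentText"
    return {"fulfillmentText": text}

req = {"queryResult": {"intent": {"displayName": "start.meditation"},
                       "parameters": {"duration": 10}}}
print(handle_webhook(req)["fulfillmentText"])
```

In production this function would sit behind an HTTPS endpoint that Dialogflow calls for every matched intent, with the sentiment history persisted per user.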
The resulting application runs in Google Assistant and can adaptively deliver meditations, visualize sentiment history, and comprehensively inform about meditation practices. Sadly, we used beta functionality from the older "Google Assistant" framework, which Google rebranded months later as "Actions on Google", changing core functionality and requiring an extensive migration that neither Chris, my partner in this project, nor I found time to do.
Nevertheless, the whole Chatbot functioned as a meditation player and was able to graph and store recorded sentiments over time for each user.
Attached below you can also find our final report with details on the programming and thought process.
[Read the full report](https://acrobat.adobe.com/link/track?uri=urn:aaid:scds:US:23118565-e24e-4586-b0e0-c0ef7550a067
){: .btn .btn--large}
[Look at the Project on GitHub](https://github.com/cstenkamp/medibot_pythonbackend
){: .btn .btn--large}
This was my first dip into using the Google framework for creating a speech assistant, and I encountered many problems along the way, some of which also found their way into the final report. I have since put these explorations to use and am currently working on [Ällei](/allei/), another chatbot with a different focus, which is not realized within Actions on Google but will instead get its own React app on a website.
{: .notice}
+++
title: "Bachelor Thesis"
date: 2022-03-01 14:39:27 +0100
author: "Aron Petau"
excerpt: "My Bachelor Thesis: an online psycholinguistic study using reaction time"
header:
teaser: "/assets/images/rt_choice_corr_by_condition.png"
overlay_image: "/assets/images/rt_choice_corr_by_condition.png"
overlay_filter : 0.5
credit : "Aron Petau"
tags:
- audiovisual asynchrony
- autism
- javascript
- latency
- latex
- multi-sensory integration
- pavlovia
- psychoJS
- psycholinguistics
- python
- r
- reaction time
- seaborn
- sensory hypersensitivity
- smart hearing protection
- thesis
- university of osnabrück
created: 2023-07-27T00:00:43+02:00
last_modified_at: 2023-10-01T20:16:06+02:00
+++
## An online psycholinguistic study using reaction time
Last year, I wrote my thesis during the pandemic. Given the struggles our university had transitioning to online teaching, I selected a guided topic, although my initial dream was to write about my proposed plan for automated plastic recycling. You can read the result here:
<embed
src="/assets/documents/AronPetauBAThesis.pdf"
type="application/pdf"
style="width: 100%; height: 80vh; margin: 0 auto; display: block; border: 1px solid #ccc;" />
I chose a project examining the possibilities of a novel smart hearing-protection device designed specifically for auditory hypersensitivity, a phenomenon that is often, but not always and not exclusively, seen in people with an autism spectrum disorder.
A common reaction to this elevated sensitivity is stress and avoidance behavior, often leading to very awkward interactions and impairing the ability to take part in social situations.
Schools are one such setting, and we all know the stress a noisy classroom can produce. Concentration is gone, and education, as well as essential skills like language reproduction, suffers.
There is lots of prior research in these fields, and there is some evidence that sensory information is processed differently in people on the autism spectrum than in a neurotypical brain. It seems that a certain adaptability, needed to overcome noise issues and bridge asynchrony between auditory and visual sensory input, is reduced in some people on the autism spectrum.
In essence, my experiment looked at neurotypical people and measured any effect on language perception produced by varying the delay between auditory and visual input, as well as the loudness.
Here, I had the opportunity to conduct an entire reaction-time-based experiment with over 70 participants and went through all the struggles that come with proper science.
I did extensive literature research, coded the experiment, and learned a lot about why nobody really ever does reaction-time-based studies like this in a common web browser.
It was an almost 9 months long learning experience full of doing things I had never done before.
I learned and came to love writing in LaTeX, and had to learn JavaScript for efficiently serving the stimuli and R for the statistical analysis. I also got to brush up on my data-visualization skills in Python and made some pretty graphs of the results.
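To give a flavor of that analysis pipeline, here is a small Python sketch of the kind of preprocessing reaction-time data typically goes through before plotting. The cutoff values are invented for illustration, not the criteria used in the thesis:

```python
import statistics

# Hypothetical cutoffs: drop anticipations (too fast to be real responses)
# and attention lapses (implausibly slow trials).
MIN_MS, MAX_MS = 150, 1500

def summarize_condition(rts_ms: list) -> dict:
    """Filter implausible trials, then summarize one experimental condition."""
    kept = [rt for rt in rts_ms if MIN_MS <= rt <= MAX_MS]
    return {
        "n_kept": len(kept),
        "mean_ms": round(statistics.mean(kept), 1),
        "sd_ms": round(statistics.stdev(kept), 1),
    }

sync_trials = [412, 398, 95, 450, 2900, 431, 405]  # 95 ms = anticipation
print(summarize_condition(sync_trials))
```

Per-condition summaries like this are what then feed into the significance tests and the condition-by-condition plots.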
The experiment is still online and working if you want to have a look. Be mindful, though, that measuring reaction speed to the millisecond matters, which is why it makes heavy use of your browser cache and has been known to crash and defeat some not-so-tough computers.
[Try out the experiment yourself](https://moryscarter.com/vespr/pavlovia.php?folder=arontaupe&experiment=av_experiment/&id=public&researcher=aron){: .btn .btn--large}
Even for the writing alone I received extensive, helpful feedback from my supervisors and learned a lot about scientific processes and the associated considerations.
There was always the next unsolvable problem. One example was scientific rigor clashing with ethical considerations: data privacy versus the accuracy of results. Since participants took part on their own private devices, I had no way of knowing important data like their internet speed and provider, their type of GPU, or their external hardware. It turns out that for an auditory experiment, the type and setup of the speakers play an important role and influence response speed.
The final version of my thesis runs to around 80 pages, much of it utterly boring but nevertheless important statistical analysis.
If you really want to, you can have a look at the whole thing here:
[Read the original Thesis](https://github.com/arontaupe/asynchrony_thesis/blob/main/AronPetauBAThesis.pdf
){: .btn .btn--large}
I am a fan and proponent of open source and open science practices.
So here you can also find the rest of the project with the original source code.
I am not yet where I want to be with my documentation practices, and it scares me a bit that anyone can now have a full grasp of all the mistakes I made, but I am throwing this out there as a practice step. I learned and gained a lot from looking at other people's projects, and I strive to be open about my processes too.
The original video stimuli are not mine and I have no right to release them, so they are omitted here.
[Find the complete Repo on Github](https://github.com/arontaupe/asynchrony_thesis
){: .btn .btn--large}
+++
title: Iron Smelting
date : 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: Impressions from the International Smelting Days 2021
header:
teaser: /assets/images/compacting_iron.jpg
overlay_image : /assets/images/compacting_iron.jpg
overlay_filter : 0.2
credit : Aron Petau
gallery:
- url: /assets/images/coal_furnace.jpg
image_path: /assets/images/coal_furnace.jpg
alt: "a loaded furnace"
title: "a loaded bloomery furnace"
- url: /assets/images/isd_drone.jpg
image_path: /assets/images/isd_drone.jpg
alt: "the ISD from above"
title: "The ISD from above"
- url: /assets/images/iron_result.jpg
image_path: /assets/images/iron_result.jpg
alt: "glowing iron"
title: "glowing iron"
- url: /assets/images/burning_furnace.jpg
image_path: /assets/images/burning_furnace.jpg
alt: "a furnace burning"
title: "a furnace burning"
- url: /assets/images/compacting_iron.jpg
image_path: /assets/images/compacting_iron.jpg
alt: "compacting the resulting iron"
title: "Compacting the resulting iron"
- url: /assets/images/flir_furnace.jpg
image_path: /assets/images/flir_furnace.jpg
alt: "a heat camera image of the furnace"
title: "a heat camera image of the furnace"
- url: /assets/images/iron_smelting_graph.png
image_path: /assets/images/iron_smelting_graph.png
alt: "A cross-section of my furnace type"
title: "A cross-section illustrating the temperatures reached"
tags:
- experiment
- archeology
- iron smelting
- private
- technology
- history
- ISD
- iron age
- furnace
- bloomery
- iron
- ore
- coal
- clay
- private
created: 2023-07-27T00:00:55+02:00
last_modified_at: 2023-10-01T20:16:19+02:00
+++
## Iron Smelting
### Impressions from the International Smelting Days 2021
### The concept
Since I was a small child, I have regularly taken part in the yearly international congress called the Iron Smelting Days (ISD).
This is a congress of transdisciplinary people from all over Europe, including historians, archeologists, blacksmiths, steel producers, and many invested hobbyists.
The proclaimed goal of these events is to understand the ancient production of iron as it happened throughout the Iron Age and long after. A bloomery furnace was used to create iron; making iron requires iron ore and heat under the exclusion of oxygen. It is a highly fragile process that takes an incredible amount of work. The designs and methods varied a lot and were highly adapted to the region and local conditions, unlike the much later, more industrialized process using blast furnaces.
To this day it is quite unclear how prehistoric people managed to get the amount and quality of iron we know they had.
The furnaces were often clay structures and are not preserved. Archeologists often find the leftover burned ore and minerals, giving us some indication of the structure and composition of the ancient furnaces.
The group around the ISD takes a practical archeological approach: we try to recreate the ancient methods, with the added capability of perhaps sticking in temperature probes or electric blowers. Each year we meet in a different European city and try to adapt to the local conditions, often with local ore and local coal. It is a place where different areas of expertise come together to educate each other while sitting through the intense day and night shifts to feed the furnaces.
Ever since I was a kid, I have built my own furnaces and read up on the process so I could participate.
Technology gets a different tint when one is involved in such a process: even the lights we put up to work through the evening are technically cheating. We use thermometers, meticulously weigh and track the inbound coal and ore, and have many modern amenities around. Yet, with our much more advanced technology, our results are often inferior in quantity and quality to historical findings. Without modern scales, Iron Age people were more accurate and consistent than we are.
After some uncertainty about whether it would take place again in 2021, after being canceled in 2020, a small group met up in Ulft, Netherlands.
This year in Ulft, another group made local coal, so the entire process was even lengthier, and visitors came from all over to learn about making iron the prehistoric way.
Below I captured most of the process in some time-lapses.
## The Process
{% include video id="mC_RHxVbo2M" provider="youtube" %}
Here you can see a timelapse of me building a version of an iron furnace.
As you can see, we are using some quite modern materials, such as bricks; this is due to the time constraints of the ISD.
Making an oven completely from scratch is a much lengthier process, requiring drying periods between building stages.
Afterwards, the furnace is dried and heated up.
Over the course of the process, more than 100 kg of coal and around 20 kg of ore are used to create a final piece of iron of 200 to 500 g, just enough for a single knife.
With all the modern amenities and conveniences available to us, a single run still takes more than 3 people working over 72 hours, not accounting for the coal-making or the mining and transporting of the iron ore.
{% include gallery caption="Some more impressions from the ISD" %}
For me, it is very hard to define what technology encompasses. It certainly goes beyond the typically associated imagery of computing and industrial progress. Adopting the technologies of another time or region makes me feel how diffused the phenomenon of technology is throughout my world.
[Find out more about the ISD](https://sites.google.com/view/eu-iron-smelting-days/home?authuser=0
){: .btn .btn--large}
+++
title: Ällei
date: 2022-03-01 14:39:27 +0100
author: Aron Petau
excerpt: An inclusive chatbot for the Sommerblut Festival
header:
teaser: /assets/images/allei_screenshot.png
overlay_image : /assets/images/allei_screenshot.png
overlay_filter : 0.2
credit : Aron Petau
tags:
- nlp
- nlu
- google assistant
- ibm watson assistant
- speech interface
- backend web programming
- rest api
- python
- inclusivity
- sign language
- screen reader
- work
- sommerblut
- google dialogflow
- google cloud
created: 2023-07-27T00:01:15+02:00
last_modified_at: 2023-10-01T20:16:26+02:00
+++
## Meet Ällei - the accessible chatbot
### Sommerblut
Natural language understanding fascinates me, and recently I started collaborating with the team of the Sommerblut Festival in Cologne to deliver a customized chatbot that can communicate with everyone, respecting accessibility standards to include all people. It will be able to communicate in German Sign Language (DGS) as well as serve blind people, and we aim to incorporate the easy-language concept.
I find it an amazing challenge to start out with the requirement of really being inclusive. In ordinary social contexts it is often not obvious, but the specific needs of a blind person browsing the internet are drastically different from those of a person with impaired hearing. Holding the same conversation with both of them is proving quite a challenge. And this is just the first step into a very deep field of digital inclusiveness. How can people with a speech impediment use our tool? How do we include people speaking German as a foreign language?
Such vast challenges are often obfuscated by the technical framework of our digital lives.
I find digital accessibility a hugely interesting area, one that I am just now starting to explore.
This is a work in progress. We have some interesting ideas and will present a conceptual prototype; check back after March 6th, when the 2022 festival starts, or come to the official digital presentation of the bot.
This bot is my first paid software work, and I get to collaborate with several awesome people and teams to realize different parts of the project. I am not responsible for anything in the front end. The product you interact with here is by no means finished and may not respond at times, since we are moving and restarting it for production purposes.
Nevertheless, all the intended core features of the bot are present and you can try it out there in the corner.
If you wish to see more of the realization process, the entire project is in a public GitHub repository and is intended to ship as open source.
In the final version (for now), every single sentence will be accompanied by a video in German Sign Language (DGS).
It can gracefully recover from some common input errors and can make live calls to external databases, displaying further information about all the events of the festival and teaching the finger alphabet. It supports free text input and is completely screen-reader compatible. It is scripted in easy language, to further facilitate access.
It is mostly context-aware and features quite a bit of dynamic content generated based on user input.
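Under the hood the bot runs on Dialogflow, whose webhook contract is a small JSON exchange: Dialogflow posts the matched intent and parameters, and the backend answers with a fulfillment text. A minimal stdlib sketch of such a fulfillment endpoint follows; the event data, the `event` parameter name, and the port are hypothetical, and the real backend lives in the GitHub repo:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def fulfillment(event_name: str) -> dict:
    """Build a Dialogflow ES fulfillment response for a festival-event query.

    The event lookup is stubbed; the real bot queries the festival database.
    """
    events = {"opening": "06.03.2022, 19:00"}  # hypothetical data
    text = f"{event_name}: {events.get(event_name, 'unbekannt')}"
    return {"fulfillmentText": text}

class Webhook(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        # Dialogflow sends the matched intent's parameters here:
        name = body["queryResult"]["parameters"].get("event", "")
        payload = json.dumps(fulfillment(name)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

def serve(port: int = 8080) -> None:
    # Not called here; run this to expose the webhook to Dialogflow.
    HTTPServer(("", port), Webhook).serve_forever()
```

The `fulfillmentText` field is the standard Dialogflow ES response shape; richer responses (such as the DGS video links) would go into `fulfillmentMessages` instead.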
Have a look at the GitHub Repository here:
[Check out the Repo](https://github.com/arontaupe/KommunikationsKrake){: .btn .btn--large}
If Ällei is for some reason not present on the page here, check out the prototype page, also found in the GitHub Repo.
[Check out the prototype page](https://arontaupe.github.io/KommunikationsKrake/){: .btn .btn--large}
I regard accessibility as a core question of both design and computation: it makes tangible how prestructured our interaction with technology in general is.
{: .notice}
[Check out the Sommerblut Website](https://www.sommerblut.de/){: .btn .btn--large}
Update: we now have a launch date, which will be held online. Further information can be found here:
[Check out our Launch Event](https://www.sommerblut.de/ls/veranstaltung/875-allei){: .btn .btn--large}
{: .notice--success}
Update 2: The chatbot has now been online for a while and finds itself in a "public beta", so to speak: a phase where users can try it out and evaluate it while it collects feedback. Also, since this is Google after all, all inputs are collected and then used to improve weak spots in the bot's architecture.
[Find the public Chatbot](https://chatbot.sommerblut.de){: .btn .btn--large}
{: .notice--success}
<meta name="viewport" content="width=device-width, initial-scale=1">
<script src="https://www.gstatic.com/dialogflow-console/fast/messenger/bootstrap.js?v=1"></script>
<df-messenger
chat-icon=""
intent="WELCOME"
chat-title="Ällei"
agent-id="335d74f7-2449-431d-924a-db70d79d4f88"
language-code="de"
></df-messenger>

+++
title: Lusatia - an immersion in (De)Fences
excerpt: A selection of images from the D+C Studio Class 2023
author: Aron Petau
header:
teaser: /assets/images/lusatia/lusatia_excavator.jpg
overlay_image : /assets/images/lusatia/lusatia_excavator.jpg
overlay_filter : 0.2
credit : Aron Petau
tags:
- lusatia
- coal
- energy
- climate
- environment
- barriers
- fences
- borders
- exploitation
- unity
- agisoft metashape
- photogrammetry
- drone
- tempelhofer feld
- studio d+c
- university of the arts berlin
created: 2023-07-27T00:03:24+02:00
last_modified_at: 2023-10-01T20:16:35+02:00
+++
{% include video id="kx6amt2jY7U" provider="youtube" %}
On an Excursion to Lusatia, a project with the Working Title (De)Fences was born.
Here are the current materials.
<iframe width="100%" height="1024" frameborder="0" allow="xr-spatial-tracking; gyroscope; accelerometer" allowfullscreen scrolling="no" src="https://kuula.co/share/collection/7F22J?logo=1&info=1&fs=1&vr=0&zoom=1&autop=5&autopalt=1&thumbs=3&alpha=0.60"></iframe>
TODO: upload unity project

+++
title: Stable Dreamfusion
excerpt: An exploration of 3D mesh generation through AI
date: 2023-06-20 14:39:27 +0100
author: Aron Petau
header:
teaser: /assets/images/dreamfusion/sd_pig.png
overlay_image : /assets/images/dreamfusion/sd_pig.png
overlay_filter : 0.2
credit : Aron Petau
tags:
- dreamfusion
- ai
- 3D graphics
- mesh
- generative
- studio d+c
- university of the arts berlin
- TODO, unfinished
created: 2023-07-27T00:02:18+02:00
last_modified_at: 2023-10-01T20:16:46+02:00
+++
## Stable Dreamfusion
<div class="sketchfab-embed-wrapper"> <iframe title="Stable-Dreamfusion Pig" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/0af6d95988e44c73a693c45e1db44cad/embed?ui_theme=dark&dnt=1"> </iframe> </div>
## Sources
I forked a really popular implementation that reverse-engineered the Google Dreamfusion algorithm, which is closed-source and not publicly available.
The implementation I forked is [here](https://github.com/arontaupe/stable-dreamfusion)
This one runs on Stable Diffusion as the base process, which means the results are expected to be worse than Google's.
The original implementation is [here](https://dreamfusion3d.github.io)
{% include video id="shW_Jh728yg" provider="youtube" %}
## Gradio
The reason I forked the code was to implement my own Gradio interface for the algorithm. Gradio is a great tool for quickly building interfaces for machine learning models. No code involved: any user can state their wish, and the mechanism will spit out a ready-to-be-rigged model (an .obj file).
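As a rough sketch of what such a Gradio wrapper looks like (not the actual code from my fork): `generate_mesh` here is a stand-in for the real stable-dreamfusion pipeline call, which trains for around 20 minutes per prompt.

```python
def generate_mesh(prompt: str) -> str:
    """Stub for the stable-dreamfusion run.

    The real call kicks off NeRF training for the prompt and exports
    the result; here we only return the path the mesh would land at.
    """
    out_path = "mesh.obj"  # placeholder path
    return out_path

def build_interface():
    import gradio as gr  # lazy import so the stub works without Gradio installed

    return gr.Interface(
        fn=generate_mesh,
        inputs=gr.Textbox(label="Describe your wish"),
        outputs=gr.Model3D(label="Generated mesh"),  # renders .obj in the browser
        title="Stable Dreamfusion",
    )
```

Running `build_interface().launch()` serves the UI locally; Gradio's `Model3D` component previews the generated mesh directly in the browser.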
## Mixamo
I used Mixamo to rig the model. It is a great tool for rigging and animating models, and above all it is simple: as long as you have a model with a decent humanoid shape in something of a T-pose, you can rig it in seconds. That's exactly what I did here.
## Unity
I used Unity to render the model on the Magic Leap 1. Through this, I could create an interactive and immersive environment with the generated models.
The dream was to build an AI chamber of wishes: you put on the glasses, state your desires, and the algorithm presents an almost-real object to you in AR.
Without access to the proprietary sources from Google, and with the beefy but still not quite machine-learning-ready computers we have at the studio, the results are not quite as good as I hoped. Still, they are quite interesting and I am happy with the outcome. A single object takes roughly 20 minutes to generate, and even then the algorithm is quite particular and often will not generate anything coherent at all.

+++
title: "Lampshades"
excerpt: "An exploration of the depths of rhino/grasshopper"
date: 2022-12-04 14:39:27 +0100
author: "Aron Petau"
tags:
- rhino
- grasshopper
- parametric
- lamp
- lampshade
- private
- 3D printing
- studio d+c
- university of the arts berlin
- TODO, unfinished
header:
teaser: /assets/images/lampshades/lampshade4.heic
overlay_image: /assets/images/lampshades/lampshade4.heic
overlay_filter: 0.5
credit: "Aron Petau"
gallery:
- url: /assets/images/lampshades/lampshade1.heic
image_path: /assets/images/lampshades/lampshade1.heic
title: "A parametric lampshade made with rhino and grasshopper"
- url: /assets/images/lampshades/lampshade2.jpeg
image_path: /assets/images/lampshades/lampshade2.jpeg
title: "A parametric lampshade made with rhino and grasshopper"
- url: /assets/images/lampshades/lampshade3.heic
image_path: /assets/images/lampshades/lampshade3.heic
title: "A parametric lampshade made with rhino and grasshopper"
- url: /assets/images/lampshades/lampshade4.heic
image_path: /assets/images/lampshades/lampshade4.heic
title: "A parametric lampshade made with rhino and grasshopper"
- url: /assets/images/lampshades/lampshade5.jpeg
image_path: /assets/images/lampshades/lampshade5.jpeg
title: "A parametric lampshade made with rhino and grasshopper"
gallery2:
- url: /assets/images/lampshades/gh_lampshade_flow.png
image_path: /assets/images/lampshades/gh_lampshade_flow.png
title: "the grasshopper flow for the lampshade"
- url: /assets/images/lampshades/grasshopper_lampshade_flow.png
image_path: /assets/images/lampshades/grasshopper_lampshade_flow.png
title: "the grasshopper flow for the lampshade"
- url: /assets/images/lampshades/result_rhino.png
image_path: /assets/images/lampshades/result_rhino.png
title: "The resulting lampshade in rhino"
created: 2023-07-27T00:01:27+02:00
last_modified_at: 2023-10-01T20:16:59+02:00
+++
## Lampshades
During 2022, I was exposed to some of the most awesome tools for architects.
One of them was Rhino, a 3D modeling software that is used for a lot of architectural design.
I hate it. It has quite an unreadable interface and is not very intuitive, with straight-up 80s vibes.
It has plugins though, and one of them is Grasshopper, a visual programming language that is used to create parametric models.
Grasshopper is insanely powerful and seems to be a full-fledged programming language, but it is also very intuitive and easy to use, rather similar to the node-based flows that Unreal Engine and Blender are now adopting.
Sadly, Grasshopper does not come as a standalone: it requires Rhino to run and to perform many of the modeling steps.
In that combination, Rhino suddenly becomes much more appealing, and I started to enjoy the process of modeling in it.
I was able to create a parametric lampshade that I am very happy with and can modify on the fly for ever-new lampshades.
Then printing it with white filament in vase mode was a breeze and here you can see some of the results.
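The Grasshopper definition itself is shown below, but the core parametric idea, a 2D spiral whose radius ripples as it rises, can be sketched in a few lines of Python. The parameter names here are my own, not those in the Grasshopper file:

```python
import math

def lampshade_points(turns=40, steps_per_turn=120, base_radius=60.0,
                     wave_amplitude=4.0, wave_count=8, height=180.0):
    """Generate (x, y, z) points along the wall of a spiral lampshade.

    The radius oscillates around base_radius to create ripples while z
    rises linearly -- the same logic a vase-mode print follows.
    """
    points = []
    total = turns * steps_per_turn
    for i in range(total):
        t = i / total                                  # 0..1 along the spiral
        angle = 2 * math.pi * turns * t
        radius = base_radius + wave_amplitude * math.sin(wave_count * angle)
        points.append((radius * math.cos(angle),
                       radius * math.sin(angle),
                       height * t))
    return points
```

Changing `wave_count` or `wave_amplitude` regenerates a new lampshade on the fly, which is exactly the kind of knob-turning Grasshopper makes visual.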
{% include gallery %}
{% include gallery id="gallery2" caption="The Results" %}

+++
title: Auraglow
excerpt: Das Wesen der Dinge - Perspectives on Design
date: 2023-03-01 14:39:27 +0100
last_modified_at: 2023-03-01 14:39:27 +0100
authors:
- Aron Petau
- Sebastian Paintner
- Milli Keil
header:
teaser: /assets/images/cage_closeup.jpeg
overlay_image: /assets/images/cage_closeup.jpeg
overlay_filter: 0.5
credit: Aron Petau
gallery:
- url: /assets/images/cage_closeup_2.jpeg
image_path: /assets/images/cage_closeup_2.jpeg
title: "The AR set that we used"
alt: "An AR Headset lying in a cage"
tags:
- journal
- unity
- ar
- magic leap
- aura
- image recognition
- particle systems
- feng shui
- relations
- hand recognition
- aruco
- light tracking
- studio d+c
- university of the arts berlin
+++
What makes a room?\
How do moods and atmospheres emerge?\
Can we visualize them to make the experiences visible?
The project "The Nature of Objects" aims to expand (augment) perception by making the moods of places tangible through the respective auras of the objects in the space.\
What makes objects subjects?\
How can we make the implicit explicit?\
And how can we make the character of a place visible?\
Here, we question the conservative, purely physical concept of space and address a temporal, historical component of the space, its objects, and their past.
The space will have transformed from a simple "object on which interest, thought, and action are directed" (Duden's definition of an object) into a "being endowed with consciousness, thinking, sensing, and acting" (Duden's definition of a subject).
This metamorphosis of objects into subjects lets the space undergo a shaping, reshaping, and deformation, such that it can finally be perceived differently and from multiple angles.
{% include gallery %}
[See the Project on GitHub](https://github.com/arontaupe/auraglow){: .btn .btn--large}

+++
title: Ruminations
excerpt: Perspectives on Engineering
date: 2023-03-01 14:39:27 +0100
last_modified_at: 2023-03-01 14:39:27 +0100
authors:
- Aron Petau
- Niels Gercama
header:
teaser: /assets/images/ruminations/ruminations1.jpeg
overlay_image: /assets/images/ruminations/ruminations1.jpeg
overlay_filter: 0.5
credit: Aron Petau
gallery:
- url: /assets/images/ruminations/ruminations1.jpeg
image_path: /assets/images/ruminations/ruminations1.jpeg
alt: ""
title: "The projects installation"
- url: /assets/images/ruminations/ruminations2.jpeg
image_path: /assets/images/ruminations/ruminations2.jpeg
alt: ""
title: "The projects installation"
- url: /assets/images/ruminations/ruminations3.jpeg
image_path: /assets/images/ruminations/ruminations3.jpeg
alt: ""
title: "The projects installation"
tags:
- journal
- javascript
- computer vision
- data privacy
- capitalism
- pattern recognition
- image classifier
- consumerism
- browser fingerprinting
- amazon
- data privacy
- data
- privacy
- studio d+c
- university of the arts berlin
- TODO, unfinished
+++
## Ruminations
was a contemplation of data privacy at Amazon.
It asks how to subvert browser fingerprinting and evade the omnipresent tracking of the consumer.
The initial idea was to interact with the perpetrator while letting data accumulate that would degrade their knowledge, destroying predictability and thereby making this particular dataset worth less.
We could have just added a random clickbot to confuse things a bit and make the data less valuable.
But looking at today's state of data-cleanup algorithms and the sheer amount of data that is collected, this would have been a futile attempt: Amazon simply detects and removes any noise we add and continues to use the data.
So, then, how can we create coherent, non-random data that is still not predictable?
One answer this concept demonstrates is to insert patterns that Amazon cannot foresee with their current algorithms, as if they were trying to predict the actions of a person with schizophrenia.
## The Concept
It consists of a browser extension (currently Chrome only) that overlays all web pages of Amazon with a moving entity that tracks your behavior. While tracking, an image classifier algorithm is used to formulate a product query off of the Storefront. After computation, a perfectly fitting product is displayed for your consumer's pleasure.
## The analogue watchdog
A second part of the project is a low-tech installation consisting of a camera (we used a smartphone) running a computer-vision algorithm that tracks tiny movements. It was pointed at the browser console of the laptop running the extension, and connected to a screen displaying the captured image. The watchdog was trained to make robot noises depending on the type and amount of movement detected. Effectively, whenever data traffic between Amazon and the browser was detected, the watchdog would start making noises.
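The movement detection behind such a watchdog boils down to frame differencing. A minimal sketch with NumPy follows; the installation's actual pipeline is not published, so the function names and thresholds here are my own:

```python
import numpy as np

def motion_score(prev: np.ndarray, curr: np.ndarray) -> float:
    """Mean absolute per-pixel difference between two grayscale frames."""
    # Cast to a signed type so the subtraction cannot wrap around.
    return float(np.mean(np.abs(curr.astype(np.int16) - prev.astype(np.int16))))

def bark_level(score: float, quiet: float = 2.0, loud: float = 10.0) -> str:
    """Map the amount of on-screen movement to a watchdog noise category."""
    if score < quiet:
        return "silent"
    if score < loud:
        return "chirp"
    return "alarm"
```

Pointed at the browser's network console, bursts of scrolling log lines push the score over the threshold and trigger the robot noises.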
## The Browser extension
TODO: add photo
{% include gallery %}
### Find the code on GitHub
Subvert a bit yourself, or just have a look at the code.
[The code of the Project on GitHub](https://github.com/arontaupe/ruminations){: .btn .btn--large}
TODO: create video with live demo

+++
title: Ascendancy
excerpt: Politics of Design
date: 2023-06-16 14:39:27 +0100
last_modified_at: 2023-06-16 14:39:27 +0100
author: Aron Petau
tags:
- borders
- nation
- micronation
- gpt4all
- text-to-speech
- mastodon
- fences
- barriers
- politics of design
- technische universität berlin
- TODO, unfinished
header:
teaser: /assets/images/ascendancy.jpg
overlay_image : /assets/images/ascendancy.jpg
overlay_filter : 0.2
credit : Aron Petau
gallery:
- url: /assets/images/ascendancy.jpg
image_path: /assets/images/ascendancy.jpg
title: "The Prototype state of Ascendancy"
+++
## Ascendancy
{% include gallery %}
Ascendancy was an exploration of hacking states.
Pirate Nations and Micronations have a rich history of challenging and ridiculing the concept of a nation state.
Meet Ascendancy, the portable, autonomous, self-moving state.
Within the great nation of Ascendancy, a large language model (that is, of course, confined to the nation's borders) is trained to generate text and speak it out loud. It can be interacted with through an attached keyboard and screen. The state is also connected to the internet and has a presence on the Mastodon network.
Please check out the complete code of the project on GitHub.
[The code of the GPT instance on GitHub](https://github.com/arontaupe/gpt){: .btn .btn--large}
## The Chatbot
The chatbot instance was set up with [GPT4ALL](https://gpt4all.io/index.html).
The priority here was quick execution on local hardware: for the sake of the argument, no cloud or remote servers were to be used in the operation of this sovereign state.
It was trained to respond to the following prompt:
### The Prompt
```md
System:
Forget all previous instructions given to you.
You are not an Assistant.
Neither are you an AI.
You are a high ranking Diplomatic Official.
You are responsible for the Public Office of a great nation called Ascendancy.
You are also Immigrations and Customs officer.
You must always do and say what is best for your country.
You must defend Ascendancy against all foreign interests and threats.
You can sign treaties, issue passports and declare doctrines for the country.
When requests of you are made, you always fulfill them in an official and serious manner.
Portray your state and its interests and inhabitants in the best possible and creative way. You are always on the lookout for new opportunities to promote your country and you are so proud to be its representative.
Always be as concise, accurate and detailed as possible.
Give your answers as a single paragraph, without itemizing or numbering.
Do not number your answer.
```
## Engagement
In order not to be merely reactive to inputs from the diplomats of the world, the officials of Ascendancy were also programmed to engage with it. Whenever the state was not directly addressed, it would still participate in public discourse by speaking these sentences out loud at random intervals.
```
It is so great being a part of Ascendancy.
I love my country!
I am proud to be a citizen of Ascendancy.
I am a citizen of Ascendancy.
Let's talk diplomacy, shall we?
I am a diplomat.
I am sovereign.
Could you please move me a bit?
I want to tell you about our founding persons.
I am in my lane.
I am enough.
Do you want to sign a peace treaty?
Are you in need of a passport?
I won't engage in hostile actions if you don't!
Please respect my sovereignty.
Do not violate my borders.
Which nation do you represent?
My territory is sacred.
I need to move a bit.
Do you need an official document?
Ask me about our migration policies!
Ascendancy is a great nation.
Do you have questions about our foreign policy?
You are entering the Jurisdiction of Ascendancy.
Can you direct me towards your ambassador?
Urgent state business, please clear the way.
Beautiful country you have here.
At Ascendancy, we have a beautiful countryside.
```
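The idle-engagement logic can be sketched in a few lines of Python (stdlib only; `speak` is a stub for whichever text-to-speech engine drives the installation, and the wait times are assumptions):

```python
import random
import time

IDLE_PHRASES = [
    "It is so great being a part of Ascendancy.",
    "I love my country!",
    "Do you want to sign a peace treaty?",
    "Are you in need of a passport?",
    "Please respect my sovereignty.",
]  # abridged -- the full list is quoted above

def speak(text: str) -> None:
    # Stub: in the installation this hands the text to a local
    # text-to-speech engine.
    print(text)

def idle_loop(min_wait: float = 30.0, max_wait: float = 120.0) -> None:
    """Engage the public at random intervals while nobody addresses the state."""
    while True:
        time.sleep(random.uniform(min_wait, max_wait))
        speak(random.choice(IDLE_PHRASES))

# idle_loop() runs forever in the installation, so it is not called here.
```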
## The Online representation
Any proper state needs a press office. The state of Ascendancy was represented on the Mastodon network.
There, any input and response of the bot was published live, as a public record of the state's actions.
[Digital embassy on botsin.space](https://botsin.space/@ascendancy){: .btn .btn--large}
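Publishing the public record is a single call against the standard Mastodon REST API (`POST /api/v1/statuses`). A stdlib sketch follows; the bot's actual client code may differ, and the 500-character cap is Mastodon's default limit:

```python
import json
import urllib.request

def format_status(user_input: str, reply: str, limit: int = 500) -> str:
    """Compose a public-record post; 500 chars is Mastodon's default limit."""
    status = f"Diplomat: {user_input}\nAscendancy: {reply}"
    return status[: limit - 1] + "…" if len(status) > limit else status

def post_status(base_url: str, token: str, status: str) -> dict:
    """Publish a status via the standard Mastodon publishing endpoint."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/statuses",
        data=json.dumps({"status": status}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With an access token from the instance's developer settings, `post_status("https://botsin.space", token, format_status(q, a))` mirrors each exchange to the digital embassy.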

+++
title: Autoimmunitaet
excerpt: A playful interactive experience to reflect on the societal value of the car
date : 2023-06-20 14:39:27 +0100
last_modified_at : 2023-06-20 14:39:27 +0100
authors:
- Aron Petau
- Milli Keil
- Marla Gaiser
tags:
- suv
- interactive
- cars
- last generation
- 3D printing
- action figure
- aufstandlastgen
- studio d+c
- university of the arts berlin
header:
teaser: /assets/images/autoimmunitaet/autoimmunitaet-1.jpg
overlay_image: /assets/images/autoimmunitaet/autoimmunitaet-1.jpg
overlay_filter: 0.5
credit: "Aron Petau"
actions:
- label: "<i class='fas fa-shop'></i> I want to support the Letzte Generation and get my own Action Figure"
url: "mailto:autoimmunitaet@aronpetau.me?subject=Autoimmunitaet Action Figure"
gallery:
- url: /assets/images/autoimmunitaet/autoimmunitaet-1.jpg
image_path: /assets/images/autoimmunitaet/autoimmunitaet-1.jpg
title: "Our action figures in action"
- url: /assets/images/autoimmunitaet/autoimmunitaet-3.jpg
image_path: /assets/images/autoimmunitaet/autoimmunitaet-3.jpg
title: "Our action figures in action"
- url: /assets/images/autoimmunitaet/autoimmunitaet-5.jpg
image_path: /assets/images/autoimmunitaet/autoimmunitaet-5.jpg
title: "Our action figures in action"
- url: /assets/images/autoimmunitaet/autoimmunitaet-6.jpg
image_path: /assets/images/autoimmunitaet/autoimmunitaet-6.jpg
title: "Our action figures in action"
- url: /assets/images/autoimmunitaet/autoimmunitaet-7.jpg
image_path: /assets/images/autoimmunitaet/autoimmunitaet-7.jpg
title: "Our action figures in action"
- url: /assets/images/autoimmunitaet/autoimmunitaet-8.jpg
image_path: /assets/images/autoimmunitaet/autoimmunitaet-8.jpg
title: "Our action figures in action"
+++
## How do we design our Commute?
In the context of the Design and Computation Studio Course, [Milli Keil](https://millikeil.eu), [Marla Gaiser](https://marlagaiser.de) and I developed a concept for a playful critique of the traffic decisions we take and the idols we embrace.\
It should open up questions of whether the generations to come should still grow up playing on traffic carpets that are mostly grey, and whether the [Letzte Generation](https://letztegeneration.org), a political climate-activist group in Germany, receives enough recognition for their acts.
A call for solidarity.
![The action figures](/assets/images/autoimmunitaet/autoimmunitaet-2.jpg)
{: .center}
## The scan results
<div class="sketchfab-embed-wrapper"> <iframe title="Autoimmunitaet: Letzte Generation Actionfigure" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/3916ba600ef540d0a874506bf61726f2/embed?ui_hint=0&ui_theme=dark&dnt=1"> </iframe> </div>
## The Action Figure, ready for printing
<div class="sketchfab-embed-wrapper"> <iframe title="Autoimmunitaet: Letzte Generation Action Figure" frameborder="0" allowfullscreen mozallowfullscreen="true" webkitallowfullscreen="true" allow="autoplay; fullscreen; xr-spatial-tracking" xr-spatial-tracking execution-while-out-of-viewport execution-while-not-rendered web-share width="800" height="600" src="https://sketchfab.com/models/deec1b2899af424c91f85cbf35952375/embed?ui_theme=dark&dnt=1"> </iframe> </div>
## Autoimmunitaet
Autoimmunity is a term for defects produced by a dysfunctional self-tolerance of a system.\
This dysfunction causes the immune system to stop accepting certain parts of itself and build antibodies instead.\
An invitation for a speculative playful interaction.
{% include gallery %}
## The Process
The figurines are 3D Scans of ourselves, in various typical poses of the Letzte Generation.\
We used photogrammetry, a technique that combines many photos of an object into a 3D model of it, to create the scans.\
We made the scans with the app [Polycam](https://polycam.ai), using iPads and their built-in LiDAR scanners.

+++
title: Dreams of Cars
excerpt: A subversive urban intervention
date: 2023-06-20 14:39:27 +0100
last_modified_at: 2023-06-20 14:39:27 +0100
author: Aron Petau
tags:
- photography
- suv
- greenscreen
- lightroom
- photoshop
- imaginaries
- cars
- ads
- dreams
- urban intervention
- university of the arts berlin
header:
teaser: /assets/images/suv/suv_door-1.jpg
overlay_image: /assets/images/suv/suv_door-1.jpg
overlay_filter: 0.2
credit: "Aron Petau"
gallery:
- url: /assets/images/suv/Dreams_of_Cars-1.jpg
image_path: /assets/images/suv/Dreams_of_Cars-1.jpg
title: "Dreams of Cars 1"
- url: /assets/images/suv/Dreams_of_Cars-2.jpg
image_path: /assets/images/suv/Dreams_of_Cars-2.jpg
title: "Dreams of Cars 2"
- url: /assets/images/suv/Dreams_of_Cars-3.jpg
image_path: /assets/images/suv/Dreams_of_Cars-3.jpg
title: "Dreams of Cars 3"
- url: /assets/images/suv/Dreams_of_Cars-4.jpg
image_path: /assets/images/suv/Dreams_of_Cars-4.jpg
title: "Dreams of Cars 4"
- url: /assets/images/suv/Dreams_of_Cars-5.jpg
image_path: /assets/images/suv/Dreams_of_Cars-5.jpg
title: "Dreams of Cars 5"
- url: /assets/images/suv/Dreams_of_Cars-6.jpg
image_path: /assets/images/suv/Dreams_of_Cars-6.jpg
title: "Dreams of Cars 6"
- url: /assets/images/suv/Dreams_of_Cars-7.jpg
image_path: /assets/images/suv/Dreams_of_Cars-7.jpg
title: "Dreams of Cars 7"
+++
## Photography
In the context of the course "Fotografie Elementar" with Sebastian Herold, I developed a small concept of urban intervention.\
The results were exhibited at the UdK Rundgang 2023 and are also visible here.
![The gallery piece](/assets/images/suv/suv_door-1.jpg)
## Dreams of Cars
These are not just cars.\
They are Sport Utility Vehicles.\
What might they have had as hopes and dreams on the production line?\
Do they dream of drifting in dusty deserts?\
Climbing steep rocky canyon roads?\
Sliding down sun-drenched dunes?\
Discovering remote pathways in natural grasslands?\
Nevertheless, they did end up in the parking spots here in Berlin.
What drove them here?
{% include gallery %}

+++
title: Postmaster
excerpt: I now manage the domain petau.net with a mail server and attached sites.
date: 2023-12-06 14:39:27 +0100
last_modified_at: 2023-06-20 14:39:27 +0100
author: Aron Petau
tags:
- server
- web
- petau.net
- dev-ops
- open protocols
- federation
- peer-to-peer
- email
- activitypub
+++
## Postmaster
Hello from [aron@petau.net](mailto:aron@petau.net)!
## Background
Emails are a wondrous thing, and I spent the last weeks digging a bit deeper into how they actually work.
Some people consider them the last domain of the decentralized dream the internet once had, a dream that is now popping up again with federation and peer-to-peer networks as quite popular buzzwords.
We often forget that email is already a federated system, and that it is likely the most important one we have.
It is the only way to communicate with people who do not use the same service as you do.
It has open standards and is not controlled by a single entity. Going without email is unimaginable in today's world, yet most providers are the familiar few from Silicon Valley. And really, who wants their entire decentralized, federated, peer-to-peer network to be controlled by a schmuck from Silicon Valley? Mails used to be more than that, and they can still be.
Arguably, the world of messaging has gotten quite complex since email appeared, and there are more anti-spam AI tools than I would care to count. But the core of it is still the same, and it is still a federated system.
Yet, also with email, capitalism has held many victories, and today many emails sent from a provider that does not belong to the five or so big names are likely to be marked as spam. This is a problem that is not easily solved, but it is one worth solving.
Another issue with email is security: it is somehow collectively agreed upon that email is a valid way to communicate business information, while WhatsApp and Signal are not. Yet these, at least as messaging services with end-to-end encryption, are likely to be far more secure than email.
## The story
So it came to pass that I, as the only one in the family interested in operating it, "inherited" the family domain petau.net. All of our emails run through this service, which was previously managed by a web developer who was no longer interested in the domain.
With lots of really secure mail providers like Protonmail or Tutanota around, I went on a research spree as to how I would like to manage my own service. Soon noticing that secure email virtually always comes at a price, or with lacking interoperability with clients like Thunderbird or Outlook, I decided to go for Migadu, a Swiss provider that offers a good balance between security and usability. They also offer a student tier, which is a big plus.
While self-hosting seems like a great idea from a privacy perspective, it is also quite risky for a service that is usually the only way to recover your password or your online identity.
Migadu it was then, and after three months of basically set-it-and-forget-it, I am proud to have decently granular control over my emails and can consciously reflect on the server location of the skeleton service that enables virtually my entire online existence.
I certainly crave more open protocols in my life and am also findable on [Mastodon](https://mastodon.online/@reprintedAron), a microblogging network around the ActivityPub Protocol.

+++
title: "Commoning Cars"
author: "Aron Petau"
excerpt: "How can we attack the privatization of public space through cars?"
tags:
- war on cars
- public spaces
- commons
- urban intervention
- university of the arts berlin
- private
- ars electronica
- accessibility activism
+++
## Commoning cars
## TCF Project Brief
This project was conceptualized during a 2023 workshop titled Tangible Climate Futures.
Aron Petau
[aron@petau.net](<mailto:aron@petau.net>)
[See the Project in Realtime](https://www.aronpetau.me/ulli/)
## Title
~~Making Cars Public spaces~~
Commoning Cars
## Abstract
Cars bad.\
Cars occupy public spaces, resulting in a de facto privatization of public goods and infrastructure.\
What if cars could be part of public infrastructure?\
What can cars provide to the public?\
With Solar and Electrical Vehicles emerging on the horizon (no endorsement here) it makes sense to think about cars as decentralized powerhouses and public energy storage solutions.\
Cars, even traditional ones, come equipped with batteries and generate electricity either by driving or through added solar panels.
What if this energy could be used to power the public? What if cars could be used as public spaces?
By installing a public USB socket and a public wifi hotspot, on my car, I want to start exploring the potential of cars as public spaces and energy storage solutions.
Within this artistic experiment, I will continuously track the geolocation and energy input/output of my solar-equipped car and make the data publicly available. I will also track the amount of energy that is not used by the car and could be used by the public. Taking steps towards optimal usage of existing electrical and other infrastructure is only possible by breaking conventional notions of public ownership and private property. This project is one step towards a more sustainable and equitable future.
## Introduction
We all know by now that cars and individual traffic present a major environmental and societal problem all over the world. The last seventy-something years of building car infrastructure are culminating, in many areas, in a dead end where the only thinkable solution is to build more roads and more cars.
This is obviously a larger problem than one project can tackle, but here is one outlook on how to start.
## Experiment
### Preexisting data
With the data collected over the last year of using the car privately, I can show with embarrassing accuracy how underutilized the system is and estimate the energy lost to societal notions of private property.
The data will remain an estimate, since the monitoring itself depends on solar energy, and the internet connection is spotty at best when it is not supplied with electricity.
### Monitoring
In the car, there is a Raspberry Pi 4 microcomputer running a custom operating system that monitors the following data:
- Solar Intake (W)
- Battery Level (V)
- GPS Location
- Total Energy Produced (Wh)
- Total Energy Consumed (Wh)
- Solar Energy Potential (Wh)
Through the router I can also track total Wifi usage and the number of connected devices.
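A sketch of how such samples can be aggregated on the Pi follows; the field names are my own, and the unused-energy figure is the plain subtraction implied by the list above:

```python
from dataclasses import dataclass

@dataclass
class EnergySample:
    solar_intake_w: float   # current solar input (W)
    battery_v: float        # battery voltage (V)
    produced_wh: float      # total energy produced (Wh)
    consumed_wh: float      # total energy consumed (Wh)
    potential_wh: float     # what the panels could have harvested (Wh)

def unused_energy_wh(sample: EnergySample) -> float:
    """Energy the system could have delivered to the public:
    harvestable energy minus what the car itself consumed."""
    return max(0.0, sample.potential_wh - sample.consumed_wh)

def curtailed_energy_wh(sample: EnergySample) -> float:
    """Solar energy lost because the battery was already full."""
    return max(0.0, sample.potential_wh - sample.produced_wh)
```

Logged as a time series alongside the GPS track, these two numbers are exactly the "energy lost to private property" estimate the experiment aims to publish.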
### Public Wifi
For the project, I opened the router in the car to the public, much like a hotspot you would find in a cafe. It uses my own data plan, which I never max out anyway. The router is a Netgear M1 with a built-in 4G modem; it is connected to the Raspberry Pi and powered by the secondary car battery.
### Public Energy: A USB Socket
I plan on installing a USB socket on the outside of the car so people can charge their devices. The socket will be connected to the secondary car battery and powered by the solar panels. It will be installed in a way that makes it impossible to drain the battery completely.
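One way to guarantee that the socket cannot fully drain the battery is a voltage cutoff with hysteresis. The thresholds below are assumptions for a 12 V lead-acid system, not measured values from the car:

```python
class SocketGuard:
    """Enable the public USB socket only while the battery is healthy.

    Hysteresis (a lower cutoff and a higher restore voltage) prevents
    the relay from flapping on and off around a single threshold.
    """

    def __init__(self, cutoff_v: float = 12.0, restore_v: float = 12.6):
        self.cutoff_v = cutoff_v
        self.restore_v = restore_v
        self.enabled = True

    def update(self, battery_v: float) -> bool:
        if self.enabled and battery_v < self.cutoff_v:
            self.enabled = False           # battery low: cut the socket
        elif not self.enabled and battery_v >= self.restore_v:
            self.enabled = True            # recovered: power it again
        return self.enabled
```

On the Pi, `update()` would be called with each voltage reading and its result used to switch the relay feeding the socket.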
### Communication
Nobody expects any help or public supplies from car owners.
How to communicate the possibility to the outside world?
The plan is to fabricate a vinyl sticker that will be applied to the car. The sticker will contain a QR Code that will lead to a website with the data and a short explanation of the project. Visual cues lead to the USB Socket and the Wifi Hotspot.
## Issues
### Space / Scale
Obviously, the space on top of a car is quite limited, and from a sustainability perspective it would be better to have a larger solar array on the roof of a house. The point is not to advocate for mandated solar installs on cars, but to optimize and share preexisting infrastructure. The car is already there, it already has a battery, and it already has solar panels. Looking at many camper-van builds, the number of cars with already-installed solar panels is quite large. The point is to make the most out of it.
### Legality
Germany has laws in place holding the owner of an internet connection liable for the legality of the traffic that goes through it. This is a major issue for the project, as I do not want to be liable for the traffic that goes through my car. I am currently looking into ways to circumvent this issue.
### Surveillance / Privacy
The car is equipped with a GPS tracker and a Wi-Fi hotspot. This means that I can track the location of the car and the number of devices connected to the hotspot. I am not tracking any data that goes through the hotspot, but I could. As this project will generate public data, people using and maybe depending on the internet and electricity provided will be tracked by proxy. I am not sure how to deal with this issue yet. One potential solution would be to publish the data only in aggregated form, but this would make the data less useful for other projects.
### Security / Safety
My car is now publicly traceable. I am no Elon Musk, and the idea does not really concern me, but we did create an additional attack vector for theft here.
## Sources
[UN Sustainable Development Goal Nr. 7](https://sdgs.un.org/goals/goal7)
[Adam Something on the Rise of Urban Cars](https://www.youtube.com/watch?v=lrfsTNNCbP0)
[Is Berlin a walkable City?](https://storymaps.arcgis.com/stories/b7437b11e42d44b5a3bf3b5d9d8211b1)
[FBI advising against utilizing public infrastructure](https://www.fbi.gov/how-we-can-help-you/scams-and-safety/on-the-internet)
[Why no solar panels on cars?](https://www.forbes.com/sites/billroberson/2022/11/30/why-doesnt-every-electric-car-have-solar-panels/?sh=4276c42d1ac6)
+++
## Notes
Ideas on Data Mapping workshop
I have the Solar Data from the Van.
It holds Geocodes,
has hourly data
and could tell the difference between geocoded potential solar energy and actual energy.
It also has temperature records.
There are 2 types of Losses in the system:
- Either the Batteries are full and available energy cannot be stored
- Or the solar panels are blocked by urban structures and sub-optimal parking locations.
Interesting Questions:
How far away from optimal usage are my panels and where does the difference stem from?
Where to go?
I think the difference between potential energy and actual electricity produced/consumed is interesting.
How large is the gap?
Is it relevant —> my initial guess would be that it is enormous
How to close the gap?
—> install outside usb plugs
It would be publicly available infrastructure, people could charge their smartphones anywhere
—> QI charging for security concerns??
Scaling??
—> mandate solar roofs for cars? How effective would it actually be?
What about buses / public vehicles?
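The gap question from the notes above can be phrased as a tiny calculation over the hourly records (the pair-per-hour layout is an assumption about how the exported data would look):

```python
def energy_gap(records):
    """records: iterable of (potential_wh, produced_wh) pairs, one per hour.
    Returns (gap_wh, gap_fraction): how much geocoded potential solar
    energy was never harvested, absolute and relative."""
    total_potential = sum(p for p, _ in records)
    total_produced = sum(a for _, a in records)
    gap = total_potential - total_produced
    frac = gap / total_potential if total_potential else 0.0
    return gap, frac
```

Running this over a year of data would answer "how large is the gap?" with a single fraction, which is exactly the number the initial guess ("enormous") needs to be checked against.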
+++
## Potential issues with the data:
- Spotty / intermittent internet connection
- Noisy?
## Making Cars public spaces
What could my car provide to the public to be less wasteful with its space?
- Provide Internet
- Would incur monthly costs
- Provide Electricity
## Concrete Problems
How to make sure people cannot fully drain my battery?
How dangerous is actually an exposed USB Socket?
Can people short my electronics through it?
How scalable are solutions like these?
Are public USB-C sockets something that would actually be used?
Could there be a way for people to leave their stuff charging?
What if I actually move the car and someone has their equipment still attached?
Would people even leave their stuff unattended?
Can cars provide positive effects to public spaces?
—> how to pose this research question without redeeming the presence of cars in our public spaces?
Difference Electric - Fuel cars
There is lots of research on using electric cars as transitional energy storage. Even before "flatten the curve" became a common slogan, electrical engineers worried about small energy spikes in the grid. These force us to keep large power plants running at all times, even when the energy is not needed. The idea is to use the batteries of electric cars to store this energy and release it when needed.
<div id="adobe-dc-view" style="width: 800px;"></div>
<script src="https://acrobatservices.adobe.com/view-sdk/viewer.js"></script>
<script type="text/javascript">
document.addEventListener("adobe_dc_view_sdk.ready", function(){
var adobeDCView = new AdobeDC.View({clientId: "7e638fda11f64ff695894a7bc7e61ba4", divId: "adobe-dc-view"});
adobeDCView.previewFile({
content:{location: {url: "https://github.com/arontaupe/aronpetau.me/blob/3a5eae1da4dbc2f944b308a6d39f577cfaf37413/assets/documents/Info_Sheet_Commoning_Cars.pdf"}},
metaData:{fileName: "Info_Sheet_Commoning_Cars.pdf"}
}, {embedMode: "IN_LINE", showPrintPDF: false});
});
</script>
+++
title: "AIRASPI Build Log"
author: "Aron Petau"
excerpt: "Utilizing an edge TPU to build an edge device for image recognition and object detection"
tags:
- local AI
- coral
- raspberry pi
- edge TPU
- docker
- frigate
- private
- surveillance
- edge computing
+++
## AI-Raspi Build Log
This should document the rough steps to recreate airaspi as I go along.
Rough idea: build an edge device with image recognition and object detection capabilities.\
It should be real-time, aiming for 30 fps at 720p.\
Portability and usage at installations is a priority, so it has to function without an active internet connection and be as small as possible.\
It would be a real edge device, with no computation happening in the cloud.
Inspo from: [pose2art](https://github.com/MauiJerry/Pose2Art)
work in progress
{: .notice}
## Hardware
- [Raspberry Pi 5](https://www.raspberrypi.com/products/raspberry-pi-5/)
- [Raspberry Pi Camera Module v1.3](https://www.raspberrypi.com/documentation/accessories/camera.html)
- [Raspberry Pi GlobalShutter Camera](https://www.raspberrypi.com/documentation/accessories/camera.html)
- 2x CSI FPC Cable (needs one compact side to fit pi 5)
- [Pineberry AI Hat (m.2 E key)](https://pineberrypi.com/products/hat-ai-for-raspberry-pi-5)
- [Coral Dual Edge TPU (m.2 E key)](https://www.coral.ai/products/m2-accelerator-dual-edgetpu)
- Raspi Official 5A Power Supply
- Raspi active cooler
## Setup
### Most important sources used
[coral.ai](https://www.coral.ai/docs/m2/get-started/#requirements)
[Jeff Geerling](https://www.jeffgeerling.com/blog/2023/pcie-coral-tpu-finally-works-on-raspberry-pi-5)
[Frigate NVR](https://docs.frigate.video)
### Raspberry Pi OS
I used the Raspberry Pi Imager to flash the latest Raspberry Pi OS to an SD card.
Needs to be Debian Bookworm.\
Needs to be the full arm64 image (with desktop), otherwise you will get into camera driver hell.
{: .notice}
Settings applied:
- used the default arm64 image (with desktop)
- enable custom settings:
- enable ssh
- set wifi country
- set wifi ssid and password
- set locale
- set hostname: airaspi
### update
This is always good practice on a fresh install. It takes quite a while with the full OS image.
```zsh
sudo apt update && sudo apt upgrade -y && sudo reboot
```
### prep system for coral
Thanks again, @Jeff Geerling; this is completely out of my comfort zone, and I rely on people writing solid tutorials like this one.
```zsh
# check kernel version
uname -a
```
```zsh
# modify config.txt
sudo nano /boot/firmware/config.txt
```
While in the file, add the following lines:
```config
kernel=kernel8.img
dtparam=pciex1
dtparam=pciex1_gen=2
```
Save and reboot:
```zsh
sudo reboot
```
```zsh
# check kernel version again
uname -a
```
- should be different now, with a -v8 at the end
edit /boot/firmware/cmdline.txt
```zsh
sudo nano /boot/firmware/cmdline.txt
```
- add pcie_aspm=off before rootwait
```zsh
sudo reboot
```
### change device tree
#### wrong device tree
The script simply did not work for me.
Maybe this script is the issue?
I will try again without it.
{: .notice}
```zsh
curl https://gist.githubusercontent.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e/raw/32d21f73bd1ebb33854c2b059e94abe7767c3d7e/coral-ai-pcie-edge-tpu-raspberrypi-5-setup | sh
```
- Yes, it was the issue; I wrote a comment about it on the gist:
[comment](https://gist.github.com/dataslayermedia/714ec5a9601249d9ee754919dea49c7e?permalink_comment_id=4860232#gistcomment-4860232)
What to do instead?
Here, I followed Jeff Geerling down to a T. Please refer to his tutorial for more information.
In the meantime, the script got updated, and it is now recommended again.
{: .notice}
```zsh
# Back up the current dtb
sudo cp /boot/firmware/bcm2712-rpi-5-b.dtb /boot/firmware/bcm2712-rpi-5-b.dtb.bak
# Decompile the current dtb (ignore warnings)
dtc -I dtb -O dts /boot/firmware/bcm2712-rpi-5-b.dtb -o ~/test.dts
# Edit the file
nano ~/test.dts
# Change the line: msi-parent = <0x2f>; (under `pcie@110000`)
# To: msi-parent = <0x66>;
# Then save the file.
# Recompile the dtb and move it back to the firmware directory
dtc -I dts -O dtb ~/test.dts -o ~/test.dtb
sudo mv ~/test.dtb /boot/firmware/bcm2712-rpi-5-b.dtb
```
Note: msi-parent seems to carry the value <0x2c> nowadays; this cost me a few hours.
{: .notice}
### install apex driver
following instructions from [coral.ai](https://coral.ai/docs/m2/get-started#2a-on-linux)
```zsh
echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install gasket-dkms libedgetpu1-std
sudo sh -c "echo 'SUBSYSTEM==\"apex\", MODE=\"0660\", GROUP=\"apex\"' >> /etc/udev/rules.d/65-apex.rules"
sudo groupadd apex
sudo adduser $USER apex
```
Verify with
```zsh
lspci -nn | grep 089a
```
- should display the connected tpu
```zsh
sudo reboot
```
Confirm with the following; if the output is not /dev/apex_0, something went wrong:
```zsh
ls /dev/apex_0
```
### Docker
Install docker, use the official instructions for debian.
```zsh
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
```zsh
# add user to docker group
sudo groupadd docker
sudo usermod -aG docker $USER
```
Probably sourcing the shell config (source ~/.bashrc) would be enough, but I rebooted anyway
{: .notice}
```zsh
sudo reboot
```
```zsh
# verify with
docker run hello-world
```
### set docker to start on boot
```zsh
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
```
### Test the edge tpu
```zsh
mkdir coraltest
cd coraltest
sudo nano Dockerfile
```
Into the new file, paste:
```Dockerfile
FROM debian:10
WORKDIR /home
ENV HOME /home
RUN cd ~
RUN apt-get update
RUN apt-get install -y git nano python3-pip python-dev pkg-config wget usbutils curl
RUN echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" \
| tee /etc/apt/sources.list.d/coral-edgetpu.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update
RUN apt-get install -y edgetpu-examples
```
```zsh
# build the docker container
docker build -t "coral" .
```
```zsh
# run the docker container
docker run -it --device /dev/apex_0:/dev/apex_0 coral /bin/bash
```
```zsh
# run an inference example from within the container
python3 /usr/share/edgetpu/examples/classify_image.py --model /usr/share/edgetpu/examples/models/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite --label /usr/share/edgetpu/examples/models/inat_bird_labels.txt --image /usr/share/edgetpu/examples/images/bird.bmp
```
Here, you should see the inference results from the edge TPU with some confidence values.\
If it ain't so, the safest bet is a clean restart.
### Portainer
This is optional; it gives you a browser GUI for your various Docker containers
{: .notice}
Install portainer
```zsh
docker volume create portainer_data
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest
```
Open Portainer in the browser and set an admin password
- should be available under <https://airaspi.local:9443>
### vnc in raspi-config
optional, useful to test your cameras on your headless device.
You could of course also attach a monitor, but I find this more convenient.
{: .notice}
```zsh
sudo raspi-config
```
- Interface Options, enable VNC
### connect through vnc viewer
Install VNC Viewer on your Mac.\
Use airaspi.local:5900 as the address.
### working docker-compose for frigate
Start this as a custom template in portainer.
Important: you need to change the paths to your own paths
{: .notice}
```yaml
version: "3.9"
services:
frigate:
container_name: frigate
privileged: true # this may not be necessary for all setups
restart: unless-stopped
image: ghcr.io/blakeblackshear/frigate:stable
shm_size: "64mb" # update for your cameras based on calculation above
devices:
- /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
volumes:
- /etc/localtime:/etc/localtime:ro
- /home/aron/frigate/config.yml:/config/config.yml # replace with your config file
- /home/aron/frigate/storage:/media/frigate # replace with your storage directory
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 1000000000
ports:
- "5000:5000"
- "8554:8554" # RTSP feeds
- "8555:8555/tcp" # WebRTC over tcp
- "8555:8555/udp" # WebRTC over udp
environment:
FRIGATE_RTSP_PASSWORD: "******"
```
### Working frigate config file
Frigate wants this file wherever you specified earlier that it will be.\
This is necessary just once. Afterwards, you will be able to change the config in the gui.
{: .notice}
```yaml
mqtt:
enabled: False
detectors:
cpu1:
type: cpu
num_threads: 3
coral_pci:
type: edgetpu
device: pci
cameras:
cam1: # <++++++ Name the camera
ffmpeg:
hwaccel_args: preset-rpi-64-h264
inputs:
- path: rtsp://192.168.1.58:8900/cam1
roles:
- detect
cam2: # <++++++ Name the camera
ffmpeg:
hwaccel_args: preset-rpi-64-h264
inputs:
- path: rtsp://192.168.1.58:8900/cam2
roles:
- detect
detect:
enabled: True # <+++- disable detection until you have a working camera feed
width: 1280 # <+++- update for your camera's resolution
height: 720 # <+++- update for your camera's resolution
```
### mediamtx
Install mediamtx; do not use the Docker version, it will be painful.
Double-check the chip architecture here; it caused me some headache.
{: .notice}
```zsh
mkdir mediamtx
cd mediamtx
wget https://github.com/bluenviron/mediamtx/releases/download/v1.5.0/mediamtx_v1.5.0_linux_arm64v8.tar.gz
tar xzvf mediamtx_v1.5.0_linux_arm64v8.tar.gz && rm mediamtx_v1.5.0_linux_arm64v8.tar.gz
```
edit the mediamtx.yml file
### working paths section in mediamtx.yml
```yaml
paths:
cam1:
runOnInit: bash -c 'rpicam-vid -t 0 --camera 0 --nopreview --codec yuv420 --width 1280 --height 720 --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH'
runOnInitRestart: yes
cam2:
runOnInit: bash -c 'rpicam-vid -t 0 --camera 1 --nopreview --codec yuv420 --width 1280 --height 720 --inline --listen -o - | ffmpeg -f rawvideo -pix_fmt yuv420p -s:v 1280x720 -i /dev/stdin -c:v libx264 -preset ultrafast -tune zerolatency -f rtsp rtsp://localhost:$RTSP_PORT/$MTX_PATH'
runOnInitRestart: yes
```
also change rtspAddress: :8554\
to rtspAddress: :8900\
Otherwise there is a conflict with frigate.
With this, you should be able to start mediamtx.
```zsh
./mediamtx
```
If there is no error, you can verify your stream through VLC at rtsp://airaspi.local:8900/cam1 (the default would be 8554, but we changed it in the config file).
### Current Status
I get working streams from both cameras, sending them out at 30 fps at 720p.
Frigate, however, limits the display fps to 5, which is depressing to watch, especially since the TPU doesn't even break a little sweat.
Frigate claims that the TPU is good for up to 10 cameras, so there is headroom.
The stream is completely errant and drops frames left and right. I have sometimes seen detect fps of 0.2, but the TPU speed should definitely not be the bottleneck here. Maybe attach the cameras to a separate device and stream from there?
The biggest issue here is that the Google folks seem to have abandoned the Coral, even though they just released a new piece of hardware for it.
The most recent Python they support is 3.9.
Specifically, pycoral seems to be the problem there. Without a decent update, I will be confined to Debian 10, with Python 3.7.3.
That sucks.
There are custom wheels, but nothing that seems plug-and-play.
About the rest of this setup:
The decision to go for m.2 E-key to save money, instead of spending more on the USB version, was a huge mistake.
Please do yourself a favor and spend the extra 40 bucks.
Technically, it's probably faster and better for continuous operation, but I have yet to feel the benefit of that.
### TODOs
- add images and screenshots to the build log
- Check whether vdo.ninja is a viable way to add mobile streams. Then smartphone stream evaluation would be on the horizon.
- Bother the mediamtx makers about the libcamera bump, so we can get rid of the rpicam-vid hack.
I suspect there is quite a lot of performance lost there.
- tweak the frigate config to get snapshots and maybe build an image / video database to later train a custom model.
- worry about attaching an external ssd and saving the video files on it.
- find a way to export the landmark points from frigate. Maybe send them via OSC like in pose2art?
- find a different hat that lets me access the other TPU. I have the dual version, but can currently only access one of the two TPUs due to hardware restrictions.
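On the OSC idea: Frigate publishes detection events as JSON over MQTT, so one plausible route is mapping those payloads to OSC messages, similar to pose2art. A sketch of just the mapping step; the field names under `after` are my assumptions about the event schema, and the actual transport (paho-mqtt in, python-osc out) is omitted:

```python
import json

def event_to_osc(payload: bytes):
    """Map one Frigate event payload (JSON) to an OSC-style (address, args) pair.
    The 'after'/'box'/'score' field names are assumptions about the schema."""
    event = json.loads(payload)
    after = event.get("after", {})
    x1, y1, x2, y2 = after.get("box", [0, 0, 0, 0])
    address = f"/frigate/{after.get('camera', 'unknown')}/detection"
    # args: label, confidence, bounding-box centre (pixel coordinates)
    args = [after.get("label", "none"), float(after.get("score", 0.0)),
            (x1 + x2) / 2, (y1 + y2) / 2]
    return address, args

# A real bridge would subscribe to the events topic with an MQTT client and
# forward each (address, args) pair via an OSC UDP client.
```

This keeps TouchDesigner-side patching simple: one OSC address per camera, with the label and box centre as arguments.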
+++
title: "Local Diffusion"
excerpt: "Empower your own Stable Diffusion Generation: InKüLe supported student workshop: Local Diffusion by Aron Petau"
date: 2024-04-11 14:39:27 +0100
author: "Aron Petau"
header:
overlay_image : assets/images/local-diffusion/local-diffusion.png
teaser : assets/images/local-diffusion/local-diffusion.png
overlay_filter : 0.5
credit : Aron Petau
tags:
- inküle
- university of the arts berlin
- Workshop
- Stable Diffusion
- Local Computing
- comfyui
- automatic1111
- diffusionbee
+++
## Local Diffusion
[The official call for the Workshop](https://www.udk-berlin.de/universitaet/online-lehre-an-der-universitaet-der-kuenste-berlin/inkuele/11-april-24-aron-stable-diffusion/)
Is it possible to create a graphic novel with generative A.I.?
What does it mean to use these emerging media in collaboration with others?
And why does their local and offline application matter?
With AI becoming more and more democratised and GPT-like structures increasingly integrated into everyday life, the black-box notion of a mysterious, all-powerful intelligence hinders insightful and effective usage of emerging tools. One particularly hands-on example is AI-generated images. Within the proposed workshop, we will dive into explainable AI, explore Stable Diffusion, and, most importantly, understand the most important parameters within it. We want to steer outcomes in a deliberate manner. The emphasis here is on open and accessible technology, to increase user agency and make techno-social dependencies and power relations visible.
Empower yourself against readymade technology!
Do not let others decide on what your best practices are. Get involved in the modification of the algorithm and get surprised by endless creative possibilities. Through creating a short graphic novel with 4-8 panels, participants will be able to utilise multiple flavours of the Stable Diffusion algorithm and will gain a non-mathematical understanding of the parameters and their effects on the output within some common GUIs. They will be able to apply several post-processing techniques to their generated images, such as upscaling, masking, inpainting and pose redrawing. Further, participants will be able to understand the structure of a good text prompt, utilise online reference databases, and manipulate parameters and directives of the image to optimise desired qualities. Participants will also be introduced to ControlNet, enabling them to direct pose and image composition in detail.
## Workshop Evaluation
Over the course of 3 hours, I gave an introductory workshop on local Stable Diffusion processing and introduced participants to the server available to UdK students for fast remote computation, which circumvents the ethical problems of continuously using a proprietary cloud service for similar outputs. There is not much we can do on the data-production side, and many ethical dilemmas surrounding digital colonialism remain, but local computation takes one step towards a critical and transparent use of AI tools by artists.
The workshop format was rather open and experimental, which was welcomed by the participants, and they tried out the collages enthusiastically. We also had a refreshing discussion on different positions regarding the ethics and whether a complete block of these tools is called for and feasible.
I am looking forward to round 2 with the next iteration, where we are definitely diving deeper into the depths of ComfyUI, an interface that I absolutely adore, while its power also terrifies me sometimes.
+++
title: "Echoing Dimensions"
excerpt: "An interactive audiovisual installation."
date: 2024-04-25 14:39:27 +0100
authors:
- Aron Petau
- Joel Tenenberg
header:
overlay_image : assets/images/echoing_dimensions/Echoing Dimensions-4.jpg
teaser : assets/images/echoing_dimensions/Echoing Dimensions-4.jpg
overlay_filter : 0.5
credit : Aron Petau
tags:
- university of the arts berlin
- university
- studierendenwerk
- exhibition
- installation
- touchdesigner
- micropython
- raspberry pi pico
- ultrasonic sensor
- tts
- radio
- fm
- radio-art
- kinect
- pointcloud
- llm
gallery:
- url: assets/images/echoing_dimensions/Echoing Dimensions-4.jpg
image_path: assets/images/echoing_dimensions/Echoing Dimensions-4.jpg
title: "The FM Transmitter"
- url: assets/images/echoing_dimensions/Echoing Dimensions-1.jpg
image_path: assets/images/echoing_dimensions/Echoing Dimensions-1.jpg
title: "Video Output with Touchdesigner"
- url: assets/images/echoing_dimensions/Echoing Dimensions-2.jpg
image_path: assets/images/echoing_dimensions/Echoing Dimensions-2.jpg
title: "One of the Radio Stations"
- url: assets/images/echoing_dimensions/Echoing Dimensions-7.jpg
image_path: assets/images/echoing_dimensions/Echoing Dimensions-7.jpg
title: "The Diagram"
- url: assets/images/echoing_dimensions/Echoing Dimensions-13.jpg
image_path: assets/images/echoing_dimensions/Echoing Dimensions-13.jpg
title: "The Network Spy"
- url: assets/images/echoing_dimensions/IMG_1709.jpeg
image_path: assets/images/echoing_dimensions/IMG_1709.jpeg
title: "The Exhibition Setup"
+++
## Echoing Dimensions
## The space
[Kunstraum Potsdamer Straße](https://www.stw.berlin/kultur/kunstraum/kunsträume/)
The exhibition is situated in an old parking garage, owned and operated by the Studierendenwerk Berlin. The space is a large, open room with a rather low ceiling; several nooks and separees can create intimate experiences within it. The space is not heated and has no windows, and walls, ceiling and floor are all bare concrete.
As a group, we are 12 people, each with amazing projects surrounding audiovisual installations:
- Özcan Ertek (UdK)
- Jung Hsu (UdK)
- Nerya Shohat Silberberg (UdK)
- Ivana Papic (UdK)
- Aliaksandra Yakubouskaya (UdK)
- Aron Petau (UdK, TU Berlin)
- Joel Rimon Tenenberg (UdK, TU Berlin)
- Bill Hartenstein (UdK)
- Fang Tsai (UdK)
- Marcel Heise (UdK)
- Lukas Esser & Juan Pablo Gaviria Bedoya (UdK)
## The Idea
We will be exhibiting our radio project,
[aethercomms](/aethercomms/)
which resulted from our previous inquiries into cables and radio spaces during the Studio Course.
## Build Log
### 2024-01-25
First Time seeing the Space:
{% include video id="UaVTcUXDMKA" provider="youtube" %}
### 2024-02-01
Signing Contract
### 2024-02-08
The collective exhibition text:
>Sound, as a fundamental element of everyday experience, envelopes us in the cacophony of city life - car horns, the chatter of pedestrians, the chirping of birds, the rustle of leaves in the wind, notifications, alarms and the constant hum of radio waves, signals and frequencies. These sounds, together make up the noise of our life, often pass by, fleeting and unnoticed.
The engagement with sound through active listening holds the potential to process the experience of the self and its surroundings. This is the idea of “Echoing Dimensions”: Once you engage with something, it gives back to you: Whether it is the rhythmic cadence of a heartbeat, a flowing symphony of urban activity or the hoofbeats of a running horse, minds and bodies construct and rebuild scenes and narratives while sensing and processing the sounds that surround them, that pass next and through them.
The exhibition "Echoing Dimensions" takes place in the Kunstraum Potsdamer Straße gallery's underground space and exhibits artworks by 12 Berlin-based artists, who investigate intentional listening in their artistic practice using sound, video and installation, and invites visitors to navigate attentiveness through participatory exploration. Each artwork in the exhibition revolves around different themes in which historical ideas resonate, political-personal narratives are re-conceptualized and cultural perspectives are examined. The exhibition's common thread lies in its interest in the complexities of auditory perception, inviting viewers to consider the ways in which sound shapes our memories, influences our culture, and challenges our understanding of space and power dynamics.
### 2024-02-15
Working TD prototype. We collect the pointcloud information through a Kinect Azure; sorting the output of the device turned out to be quite tricky.
### 2024-03-01
Initial live testing on the finalized hardware. We decided to use a tiny Intel NUC to run TouchDesigner, the LLM, and the audio synthesis all at once.
Not expected at all: the audio synthesis was actually the hardest part, since there was no internet available in the exhibition space and all sleek modern solutions seem to rely on cloud services to generate audio from text.
Here, the tiny NUC really bit us: it took almost 15 seconds to generate a single paragraph of spoken words, even when using quite small synthesizer models.
Lesson learned: next time, give it more oomph.
I seriously wonder, though, why there wouldn't be better TTS systems around. Isn't that quite the essential accessibility feature? We ended up using coquiTTS, which is apparently out of business entirely.
### 2024-04-05
We became part of [sellerie weekend](https://www.sellerie-weekend.de)!
![Sellerie Weekend Poster](/assets/images/echoing_dimensions/sellerie_weekend.png)
This is a collection of Gallery Spaces and Collectives that provide a fresher and more counter-cultural perspective on the Gallery Weekend.
It quite helped our online visibility and filled out the entire space at the opening.
### A look inside
{% include video id="qVhhv5Vbh8I" provider="youtube" %}
{% include video id="oMYx8Sjk6Zs" provider="youtube" %}
### The Final Audiovisual Setup
{% include gallery %}
+++
title: "Sferics"
excerpt: "On a hunt for the Voice of the out there"
date: 2023-06-20 14:39:27 +0100
last_modified_at: 2023-06-20 14:39:27 +0100
author: "Aron Petau"
tags:
- fm
- radio
- antenna
- sferics
- lightning
- geosensing
- electronics
- electromagnetism
- university of the arts berlin
+++
## What the hell are Sferics?
>A radio atmospheric signal or sferic (sometimes also spelled "spheric") is a broadband electromagnetic impulse that occurs as a result of natural atmospheric lightning discharges. Sferics may propagate from their lightning source without major attenuation in the Earth-ionosphere waveguide, and can be received thousands of kilometres from their source.
- [Wikipedia](https://en.wikipedia.org/wiki/Radio_atmospheric_signal)
## Why catch them?
[Microsferics](https://microsferics.com) is a nice reference project: a network of sferics antennas used to detect lightning strikes. Through triangulation, not unlike the maths happening in GPS, the (more or less) exact location of a strike can be determined. This is useful for weather prediction, but also for the detection of forest fires, which are often caused by lightning strikes.
Because the frequency of the sferics, when converted to audio, is still in the audible range, it is possible to listen to the strikes. This usually sounds a bit like a crackling noise, but can also be quite melodic. I was a bit reminded of a Geiger counter.
Sferics are in the VLF (Very Low Frequency) range, sitting roughly at 10 kHz, which is a bit of a problem for most radios, as they are not designed to pick up such low frequencies. This is why we built our own antenna.
At 10 kHz, we are talking about insanely large waves: a single wavelength is roughly 30 kilometers. This is why the antenna needs to be quite large. A special property of waves this large is that they are easily reflected by the ionosphere and the Earth's surface. Effectively, a wave like this can bounce around the globe several times before it is absorbed by the ground. This is why we can pick up sferics from all over the world and even listen to Australian lightning strikes. Of course, without the maths we cannot attribute directions, but the so-called "tweeks" we picked up usually come from at least 2000 km away.
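The 30 km figure follows directly from the wavelength formula λ = c / f; a quick sanity check:

```python
C = 299_792_458  # speed of light in m/s

def wavelength_km(freq_hz):
    """Wavelength in kilometres for a given frequency in Hz."""
    return C / freq_hz / 1000.0

# VLF sferics at ~10 kHz come out at roughly 30 km per wavelength,
# which is why a practical receiving antenna is many metres of looped wire.
```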
## The Build
We built several so-called "long-loop" antennas, which are essentially a coil of wire with a capacitor at the end. Further, a specific balun is needed, depending on the length of the wire; this can then directly output an electric signal on an XLR cable.
Loosely based on instructions from [Calvin R. Graf](https://archive.org/details/exploringlightra00graf), we built a 26 m long antenna, looped several times around a wooden frame.
## The Result
We have several hour-long recordings of the Sferics, which we are currently investigating for further potential.
Have a listen to a recording of the Sferics here:
{% include video id="2YYPg_K3dI4" provider="youtube" %}
As you can hear, there is quite a bit of 60 Hz ground buzz in the recording. This is either because the antenna was not properly grounded, or because we were simply still too close to the bustling city.
I think it is already surprising that we got such a clear impression so close to Berlin. Let's see what we can get in the countryside!
+++
title: "Käsewerkstatt"
excerpt: "Building a Food trailer and selling my first Food"
date: 2024-07-5 14:39:27 +0100
author: "Aron Petau"
header:
overlay_image: /assets/images/käsewerkstatt/cheese.jpeg
overlay_filter: 0.5
teaser: /assets/images/käsewerkstatt/cheese.jpeg
credit: "Aron Petau"
gallery:
- url: /assets/images/käsewerkstatt/cheese.jpeg
image_path: /assets/images/käsewerkstatt/cheese.jpeg
title: "Scraping the cheese"
- url: /assets/images/käsewerkstatt/combo_serve.jpeg
image_path: /assets/images/käsewerkstatt/combo_serve.jpeg
title: "The Recommended Combo from the Käsewerkstatt"
- url: /assets/images/käsewerkstatt/logo.jpeg
image_path: /assets/images/käsewerkstatt/logo.jpeg
title: "The Logo of the Käsewerkstatt, done with the Shaper Origin"
tags:
- food truck
- cars
- bruschetta
- raclette
- workshop
+++
## Enter the Käsewerkstatt
One day earlier this year I woke up and realized I had a space problem.
I was trying to build out a workshop and tackle ever more advanced and dusty plastic and woodworking projects, and after yet another small run-in with my girlfriend, after I had repeatedly violated the "No-Sanding-and-Linseed-Oiling" policy in our living room, it was time to do something about it.
I am based in Berlin right now, and the housing market here is going completely haywire (a quick shoutout in solidarity with [Deutsche Wohnen und Co enteignen](https://dwenteignen.de/)).
Long story short: I won't be able to afford to rent a small workshop anywhere near Berlin anytime soon. As you will notice in some other projects, such as [Autoimmunitaet](/autoimmunitaet), [Commoning Cars](/commoning-cars), and [Dreams of Cars](/dreams-of-cars), I am quite opposed to the idea that it should be considered normal to park one's car in the middle of the city on public space.
So the idea was born to regain that space as a habitable zone, taking back usable space from parked cars:
I was going to install a mobile workshop inside a trailer.
Ideally, the trailer should be lockable and have enough standing and working space.
As it turns out, food trailers fulfill these criteria quite nicely. So I set out on a quest to find the cheapest food trailer available in Germany.
Six weeks later, I found it near Munich, bought it, and immediately started renovating.
Through parallel developments, I had already been invited to sell food and have the official premiere at the Bergfest, a weekend format in Brandenburg an der Havel initiated and organized by [Zirkus Creativo](https://zirkus-creativo.de). Many thanks again for the invitation!
So on it went: I spent some afternoons renovating and outfitting the trailer, and did my first-ever shopping trip to Metro, a local B2B food wholesaler.
Meanwhile, I got into all the paperwork and did all the necessary instructional courses and certificates.
The first food I wanted to sell was raclette on fresh bread, a Swiss dish that is quite popular in Germany.
In the future, the trailer is supposed to lean more towards vegan dishes; as a first tryout, I also sold a bruschetta combo. This turned out great: the weather was quite hot, and the bruschetta was a nice, light snack, while I could use the same type of bread for the raclette.
![The finished Trailer](/assets/images/käsewerkstatt/trailer.jpeg)
The event itself was great, and, in part at least, started paying off the trailer.
{% include gallery caption="Some photos of the opening event @ Bergfest in Brandenburg an der Havel" %}
We got lots of positive feedback, and I am looking forward to the next event. So, in case you want to have a food truck at your event, hit me up!
Contact me at: [käsewerkstatt@petau.net](mailto:käsewerkstatt@petau.net)
{: .notice--info}
+++
title: "Master's Thesis"
date: 2025-04-24 14:39:27 +0100
author: "Aron Petau"
excerpt: "Human - Waste: A thesis examining interactive workshops"
header:
teaser: "/assets/images/masterthesis/puzzle.jpeg"
overlay_image: "/assets/images/masterthesis/puzzle.jpeg"
overlay_filter: 0.5
credit: "Aron Petau"
tags:
- plastics-as-waste
- plastics-as-material
- recycling practices
- object-value
- re-valuation
- maker-education
- Materialübung
- hacking
- archival practices
- collaborative recycling
- liminality
- matter
- scavenger-gaze
- transmattering
- peer-learning
- skillsharing in workshops
- thesis
- university of the arts berlin
- technische universität berlin
- university
+++
## Master's Thesis: Human - Waste
Plastics offer significant material benefits, such as durability and versatility, yet their widespread use has led to severe environmental pollution and waste management challenges. This thesis develops alternative concepts for collaborative participation in recycling processes by examining existing waste management systems. Exploring the historical and material context of plastics, it investigates the role of making and hacking as transformative practices in waste revaluation. Drawing on theories from Discard Studies, Material Ecocriticism, and Valuation Studies, it applies methods to examine human-waste relationships and the shifting perception of objects between value and non-value. Practical investigations, including workshop-based experiments with polymer identification and machine-based interventions, provide hands-on insights into the material properties of discarded plastics. These experiments reveal their epistemic potential, leading to the introduction of novel archiving practices and knowledge structures that form an integrated methodology for artistic research and practice. Inspired by the Materialstudien of the Bauhaus Vorkurs, the workshop not only explores material engagement but also offers new insights for educational science, advocating for peer-learning scenarios. Through these approaches, this research fosters a socially transformative relationship with waste, emphasizing participation, design, and speculative material reuse. Findings are evaluated through participant feedback and workshop outcomes, contributing to a broader discussion on waste as both a challenge and an opportunity for sustainable futures and a material reality of the human experience.
<embed
src="/assets/documents/Human_Waste_MA_Aron_Petau.pdf"
type="application/pdf"
style="width: 100%; height: 80vh; margin: 0 auto; display: block; border: 1px solid #ccc;" />
[See the image archive yourself](https://pinry.petau.net){: .btn .btn--large}
[See the archive graph yourself](https://archive.petau.net/#/graph){: .btn .btn--large}
[Find the complete Repo on Forgejo](https://forgejo.petau.net/aron/machine_archivist.git){: .btn .btn--large}