Export Firestore to dev environment

Sooner or later you need production data in your development environment to test or develop a richer UI experience based on real-life data. There are a few ways of doing this, but this is the easiest, especially if you’re on a Windows machine.

You probably already have persistent data in your dev environment; if not, just run the emulators with the following command.

firebase emulators:start --import=./dev_data --export-on-exit

This will import any existing data into the Firestore and Authentication emulators, export it again on exit, and import it on the next run. Don’t forget to add ./dev_data to your .gitignore.
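For example, the relevant .gitignore entries might look like this (prod_data is the folder used for production data later in this post):

# .gitignore: keep emulator and production data out of version control
dev_data/
prod_data/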

Download the data

The data download needs to go via a storage bucket. You will probably already have a few buckets if you have published your project at any point. I also assume that you have the firebase and gcloud utilities installed and authenticated. If not, install them and run:

firebase login
gcloud auth login

Then export the data to the default storage bucket. If you’re on Windows, do not use the Cloud Firestore Import/Export page in the console: it adds special characters to the filenames, preventing you from downloading the data. Use the gcloud tool instead.

gcloud firestore export gs://your-project-name.appspot.com/your-chosen-folder-name

Depending on the size of your Firestore this can take some time. When it’s done we can download the data with gsutil.

gsutil -m cp -r gs://your-project-name.appspot.com/your-chosen-folder-name .

The trailing . means that the data will be downloaded to the current working directory. I put the data in a folder named prod_data inside my project folder. This folder is also added to the .gitignore file to prevent the data from leaving my dev environment. Then I can run my emulators with the downloaded data.

firebase emulators:start --import=./prod_data --export-on-exit

Conclusion

You can easily automate this procedure with a simple PowerShell script to grab fresh data whenever you need it. Always be careful when working with production data in your dev environment: make sure you don’t leak any data via git, keep the data encrypted on your workstation, and delete it as soon as you no longer need it.
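As a starting point, here is a minimal PowerShell sketch of that automation. The bucket and folder names are placeholders, as above, and the script simply chains the commands from this post:

# grab-prod-data.ps1: refresh local emulator data from production (hypothetical helper)
$bucket = "gs://your-project-name.appspot.com"
$folder = "prod_data"
# export production Firestore to the bucket, then pull it down locally
gcloud firestore export "$bucket/$folder"
gsutil -m cp -r "$bucket/$folder" .
# run the emulators against the fresh copy
firebase emulators:start --import=./prod_data --export-on-exit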

More reading: https://firebase.google.com/docs/firestore/manage-data/export-import

Raspberry Pi pains of the past

Don’t get me wrong, I love my Raspberry Pis, and I have spent a lot of time on both successful and less successful projects. A lot has happened since they first came out in 2012: more powerful hardware and much better software support. In recent years I have done everything from home automation projects to VPN gateways and Docker clusters. I had totally forgotten the pains of reaching too high on the old versions and running straight into a brick wall of limitations. But now I remember!

Read More

Google Maps only for the city?

Before the Covid-19 pandemic I used to travel a lot, using Google Maps to find the local hotspots and Google Maps reviews to avoid the tourist traps. So I started writing my own reviews and adding pictures. Besides being a traveler, programmer, and tech nut, I’m also a hiker, and I hike a lot. I have tons of photos and GPS tracks of official and unofficial hiking trails, shelters, and firepits that I would like to share. So a hobby project idea was born.

Read More

OctoScreen on Adafruit PiTFT 320×240

Finally got around to moving my Ender 5 Plus, and its loud power supply, out of my office. So more than ever I want OctoPrint up and running for remote monitoring of my setup. I had an unused Raspberry Pi 4 and found an old Adafruit PiTFT 320×240 touch screen in the scrap bin. Getting OctoPrint up and running is easier than ever with the pre-built image for Raspberry Pi. The old touch screen was another story, so here are my final notes on how to get it working. I assume that my audience already has OctoPrint up and running and just wants the screen to work.

Preparations

First we need to upgrade everything to the latest version. Connect to the OctoPi via SSH; the default username/password is pi/raspberry.

sudo apt-get update && sudo apt-get upgrade

Then make sure that OctoPrint itself is up to date from the OctoPrint web UI. After that we can install the screen with Adafruit’s automation script. More information here: https://learn.adafruit.com/adafruit-pitft-28-inch-resistive-touchscreen-display-raspberry-pi/easy-install-2

cd ~
sudo apt-get install -y git python3-pip
sudo pip3 install --upgrade adafruit-python-shell click==7.0
git clone https://github.com/adafruit/Raspberry-Pi-Installer-Scripts.git
cd Raspberry-Pi-Installer-Scripts
sudo python3 adafruit-pitft.py

Go for the HDMI mirroring option, not the console one! When the setup has completed correctly, you should see the login screen after a reboot.

OctoScreen installation and setup

Installation of OctoScreen is pretty straightforward. It’s an X11 application rather than a web browser UI like TouchUI and similar. More information on the OctoScreen install: https://github.com/Z-Bolt/OctoScreen

sudo apt-get install libgtk-3-0 xserver-xorg xinit x11-xserver-utils
wget https://github.com/Z-Bolt/OctoScreen/releases/download/v2.7.2/octoscreen_2.7.2_armhf.deb
sudo dpkg -i octoscreen_2.7.2_armhf.deb

After the installation completes and a reboot, one arm of the OctoPrint octopus becomes visible on the screen. We need to change the resolution to match the screen, so edit the OctoScreen config file and change the OCTOSCREEN_RESOLUTION value to 320x240.

sudo nano /etc/octoscreen/config
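The relevant line in the file should end up looking like this:

OCTOSCREEN_RESOLUTION=320x240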

Screen settings

The screen still doesn’t look as expected and needs some additional tweaking. First, change the resolution settings for the screen in the boot config.

sudo nano /boot/config.txt

Uncomment and set the following values:

framebuffer_width=660
framebuffer_height=390

At the bottom change the HDMI settings to:

hdmi_cvt=660 390 60 1 0 0

After a reboot the screen should look fine, but the touch input will be misaligned and unusable. Both the X and Y axes seem to be inverted, though not in a fully logical way; this also has to do with the change of resolution. Install xtcal to calibrate the touchscreen for X11 use.

cd ~
sudo apt-get install libxaw7-dev libxxf86vm-dev libxft-dev
git clone https://github.com/KurtJacobson/xtcal
cd xtcal
make

Now we run the calibration with the same values we used for the framebuffer settings to get an accurate calibration.

pi@octopi:~ $ DISPLAY=:0.0 xtcal/xtcal -geometry 660x390
fullscreen not supported
Calibrate by issuing the command below, substituting <device name> with the name found using xinput list.
xinput set-prop <device name> 'Coordinate Transformation Matrix' -0.003810 -1.123983 1.038378 1.124421 0.006780 -0.076729 0 0 1

The output at the bottom is the actual calibration information needed to get it all to work, in this case -0.003810 -1.123983 1.038378 1.124421 0.006780 -0.076729 0 0 1. We then add a transformation matrix to offset and calibrate the touch screen. Edit the file with sudo nano /usr/share/X11/xorg.conf.d/20-calibration.conf and add the configuration below; you can see where the calibration output is added into it. Then reboot the Pi again and it should all work just fine. You can read more about the calibration here: https://learn.adafruit.com/adafruit-pitft-28-inch-resistive-touchscreen-display-raspberry-pi/resistive-touchscreen-manual-install-calibrate

Section "InputClass"
        Identifier "STMPE Touchscreen Calibration"
        MatchProduct "stmpe"
        MatchDevicePath "/dev/input/event*"
        Driver "libinput"
        Option "TransformationMatrix" "-0.003810 -1.123983 1.038378 1.124421 0.006780 -0.076729 0 0 1"
EndSection

Conclusion

This will work just fine and you can use most of the menus. Some of them will not fit properly on the screen, though, and then you will not be able to get back to the main screen. If that happens and you don’t want to reboot the Pi, you can issue sudo service octoscreen restart over SSH.

So this is somewhat useful, but you can tell that OctoScreen isn’t designed for such a small screen. But if you ended up on this post you’re probably in the same situation I was in and just want it to work!

Firebase: Unit-testing Firestore rules

Developing serverless web applications on Firebase is great: quick and easy for new project ideas. The most important part of a Firebase deploy is the Firestore rules, since the client talks directly to the database. Todd Kerpelman at Firebase made a couple of really great videos on unit testing the security rules, which are a great way to get started. Once you have the tests running you really want to put them into your build chain and make sure they are executed before each deploy.
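One way to wire that in, as a sketch, is a predeploy hook in firebase.json (assuming your rule tests run via a hypothetical npm script named test:rules):

{
  "firestore": {
    "rules": "firestore.rules",
    "predeploy": ["npm run test:rules"]
  }
}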

Read More

GCB Firebase deploy error

Using Google Cloud Build to build and deploy your frontend Node.js application is well documented in Building Node.js applications and Deploying to Firebase. However, all of these guides, or at least a majority of them, assume a Linux-based development system. If you’re on a Windows machine you can run into a very specific problem with the firebase build step and its Docker container.

gcr.io/monitorbeat20/firebase:latest
standard_init_linux.go:211: exec user process caused "no such file or directory"

When the container is built by following the guide on Windows, you will end up with the error above. This is because Windows uses CRLF instead of LF as the end-of-line sequence. The simplest way around this is to use WSL (Windows Subsystem for Linux) to follow the guide, and the build container will work just fine.
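If you’d rather avoid WSL, a hypothetical alternative is to force LF endings for the files that end up inside the container via a .gitattributes rule:

# .gitattributes: keep files executed inside the Linux container in LF
*.sh text eol=lf
Dockerfile text eol=lf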

Unifi Controller behind Traefik

A proper SSL certificate on the Unifi Controller is more of a cosmetic fix than a security one. The self-signed certificate is fine from a security standpoint but annoying when accessing the controller. I run my controller in a Docker container on my swarm and use Traefik for ingress and SSL. Read more about my Traefik setup here.

The setup of the Unifi Controller behind any reverse proxy is easy enough, especially if you have no external access to consider. Still, in my opinion the controller is best kept in the same physical network as the equipment.
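For reference, a minimal sketch of the Traefik side with v2-style labels on the controller service might look like this (the hostname, and the assumption that the Unifi web UI listens on HTTPS port 8443, are illustrative):

deploy:
  labels:
    # route the hostname to this service and terminate TLS in Traefik
    - traefik.enable=true
    - traefik.http.routers.unifi.rule=Host(`unifi.example.com`)
    - traefik.http.routers.unifi.tls=true
    # the controller itself speaks HTTPS on 8443 with its self-signed cert
    - traefik.http.services.unifi.loadbalancer.server.port=8443
    - traefik.http.services.unifi.loadbalancer.server.scheme=https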

Read More

Traefik reverse proxy with docker swarm

A reverse proxy is used to distribute traffic over a scalable application running in several containers. This is needed since you can’t publish the same port for all the containers. Traefik is a Docker-aware reverse proxy that can route and distribute all the incoming traffic to the correct containers. We are going to solve a different problem with it: giving all our virtual appliances with web UIs simple URLs and HTTPS security.

Read More

MySQL on Docker Swarm revisited

It all started with a central database for my Kodi media players. Then I migrated the setup from a dedicated Raspberry Pi to running MySQL on Docker swarm. That gave me much more stability and availability, but it still needed backups. The setup ended up limiting the container to a specific node and, via a cron job, executing the backup script inside the container. You can read more about that setup in my Kodi central db backup post. This was not optimal, since I was after more stability and availability by running the application containerized on a swarm, yet I still had that one single point of failure that I didn’t want! There is a better way: a docker stack with the backup containerized!
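As a taste of the idea, here is a minimal, hypothetical stack sketch with the backup running as its own service; the image choices, the placeholder credential, and the daily mysqldump loop are illustrative only, the linked post has the real setup:

version: "3.7"
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder credential
    volumes:
      - dbdata:/var/lib/mysql
  backup:
    image: mysql:8
    # dump everything once a day over the stack's internal network
    entrypoint: ["sh", "-c", "while true; do sleep 86400; mysqldump -h db -uroot -pchangeme --all-databases > /backup/all.sql; done"]
    volumes:
      - backups:/backup
volumes:
  dbdata:
  backups: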

Read More

Extend Ubuntu Server Partition

I’m virtualizing several test nodes on a Proxmox server in my homelab. Since Proxmox doesn’t use thin provisioning for disks, I’m a bit cheap with the disk space. Each node in my six-node Docker cluster was awarded 8 GB of root disk, and now I had less than 100 MB free. There are three steps to extending the root disk of Ubuntu.

Extend partition – Physical Volume (PV)

Since this is a virtual server, I first extended the virtual disk in Proxmox. That is hot-plug, but Ubuntu doesn’t extend the partition on its own, so next we need to extend the partition on disk. To do this I downloaded the GParted Live CD ISO and mounted it on my virtual machine; this works just as well on a physical machine booting from a CD or a USB stick. Extend the partition to the full size of the disk, apply, and reboot back into Ubuntu.
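If you’d rather skip the live CD, a hypothetical alternative is growpart from cloud-guest-utils, which grows the partition in place; resize the physical volume afterwards so LVM sees the new space (assuming the root partition is /dev/sda3, adjust to your layout):

# grow partition 3 on /dev/sda to fill the disk, then grow the LVM PV
sudo apt-get install -y cloud-guest-utils
sudo growpart /dev/sda 3
sudo pvresize /dev/sda3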

Extend Volume Group (VG) & Logical Volume (LV)

When the server is back up we can extend the volume group and logical volume. The volume group is an abstracted pool of drive space that can be spread across multiple drives/devices. The logical volume is the actual space that Ubuntu “sees” in terms of filesystems etc.

$ sudo lvm
lvm> lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
lvm> exit

Extend filesystem

Then we need to extend the filesystem to take up all the space.

$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.44.1 (21-Oct-2020)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 58
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 120784896 (4k) blocks long.

Now you have an extended root partition and plenty of space.
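A quick check confirms that the root filesystem now spans the full disk:

df -h /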