Using Google Cloud Build to build and deploy your frontend Node.js application is well documented in Building Node.js applications and Deploying to Firebase. However, all of these guides, or at least a majority of them, assume a Linux-based development system. If you're on a Windows machine you can run into a very specific problem with the Firebase build step and its Docker container.
standard_init_linux.go:211: exec user process caused "no such file or directory"
If you follow the guide on Windows, the build ends with the error above. This is because Windows uses CRLF instead of LF as line endings. The simplest way around this is to use WSL (Windows Subsystem for Linux) to follow the guide; the build container will then work just fine.
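If you would rather stay on Windows, converting the offending files' line endings before building also works. A minimal sketch using sed (the filename here is a placeholder; dos2unix achieves the same thing):

```shell
# Create a script with Windows (CRLF) line endings to demonstrate the fix
printf '#!/bin/sh\r\necho hello\r\n' > entrypoint.sh

# Strip the carriage returns in place, leaving Unix (LF) endings
sed -i 's/\r$//' entrypoint.sh

# Verify: no CR bytes remain
grep -q "$(printf '\r')" entrypoint.sh && echo "still CRLF" || echo "LF only"
```

Configuring git with `core.autocrlf=input` on Windows also avoids the problem at checkout time.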
A proper SSL certificate on the Unifi Controller is more of a cosmetic fix than a security one. The self-signed certificate is fine from a security standpoint but annoying when accessing the controller. I run my controller in a Docker container on my swarm and use Traefik for ingress and SSL. Read more about my Traefik setup here.
The setup of the Unifi Controller behind any reverse proxy is easy enough, especially if you have no external access to consider. Still, in my opinion, the controller is best kept in the same physical network as the equipment. Read More
A reverse proxy is used to distribute traffic over a scalable application running in several containers. This is needed since you can't publish the same port for all the containers. Traefik is a Docker-aware reverse proxy that can route and distribute all the incoming traffic to the correct containers. We are going to solve a different problem with it: giving all our virtual appliances with web UIs simple URLs and HTTPS. Read More
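As a sketch of the idea (the hostname, service, and certificate resolver names here are placeholders, not my actual setup), a Traefik v2 routing rule on a swarm service is expressed as labels:

```yaml
# docker-compose style fragment; whoami.example.com is a placeholder host
services:
  whoami:
    image: traefik/whoami
    deploy:
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
        - "traefik.http.routers.whoami.entrypoints=websecure"
        - "traefik.http.routers.whoami.tls.certresolver=letsencrypt"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"
```

In swarm mode the labels go under `deploy:` so Traefik reads them from the service, not the individual containers.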
It all started with a central database for my Kodi media players. Then I migrated the setup from a dedicated Raspberry Pi to running MySQL on docker swarm. That gave me much more stability and availability, but the solution still needed backups. The setup ended up pinning the Docker container to a specific node and executing the backup script in the container via a cron job. You can read more about that setup in my Kodi central db backup post. This setup was not optimal: I was after more stability and availability by running the application containerized on a swarm, yet I still had that one single point of failure that I didn't want! There is a better way: a docker stack with the backup containerized! Read More
I’m virtualizing several test nodes on a Proxmox server in my homelab. Since Proxmox doesn’t use thin provisioning for disks, I’m a bit cheap with the disk space. My 6-node docker cluster was awarded 8 GB of root disk each, and now I had less than 100 MB free. There are three steps to extend the root disk of Ubuntu.
Extend partition – Physical Volume (PV)
Since this is a virtual server, I first extended the virtual disk in Proxmox. That can be done hot-plugged, but Ubuntu doesn’t extend the partition on its own, so next we need to extend the partition on disk. To do this I downloaded the GParted Live CD ISO and mounted it in my virtual machine. This works just as well on a physical machine booting from a CD or a USB stick. Just extend the partition to the full size of the disk, apply, and reboot back into Ubuntu.
Extend Volume Group (VG) & Logical Volume (LV)
When the server is back up we can extend the volume group. The volume group is an abstracted pool of drive space that can span multiple drives/devices. The logical volume is the actual space that Ubuntu “sees” in terms of filesystems.
$ sudo lvm
lvm> lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
lvm> exit
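The same thing can be done in a single non-interactive step, which is handy in scripts (the device path assumes the default Ubuntu LVM layout shown above):

```shell
# Grow the logical volume to consume all free space in the volume group
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
```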
Then we need to extend the filesystem to take up all the space.
$ sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
resize2fs 1.44.1 (21-Okt-2020)
Filesystem at /dev/ubuntu-vg/ubuntu-lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 58
The filesystem on /dev/ubuntu-vg/ubuntu-lv is now 120784896 (4k) blocks long.
Now you have an extended root partition and plenty of space.
I’m currently converting a few Google App Engine projects from Python 2.7 to Python 3. This includes a bunch of changes to the code, since Google is moving away from the built-in Google App Engine classes. During the first few steps of converting the app you start swapping out the dependencies still on Python 2.7. During the first step of the guide, Overview of migrating bundled App Engine services, I ran into trouble. Read More
The Unifi series from Ubiquiti has great features for centralized management of larger networks. There are, however, many things not supported in the Cloud Key UI that can still be configured. During the last deployment, we had two additional needs we couldn’t accomplish from the Cloud Key itself.
- Multiple WAN addresses – we needed to configure more than one fixed IP on the WAN interface.
- IP-Sec SHA256 hash – one of our site-to-site VPN connections required SHA256 as the hash algorithm.
There are several guides on how to accomplish this, but they are scattered all over the place. This is a complete write-up of how to accomplish both and provision the changes to the devices. Read More
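On the USG this kind of configuration is provisioned through a config.gateway.json file placed in the site directory on the controller; the JSON mirrors the EdgeOS configuration tree. A hedged sketch covering both needs (interface name, addresses, and IKE group name are placeholders, not our production values):

```json
{
  "interfaces": {
    "ethernet": {
      "eth0": {
        "address": ["203.0.113.10/24", "203.0.113.11/24"]
      }
    }
  },
  "vpn": {
    "ipsec": {
      "ike-group": {
        "IKE0": {
          "proposal": {
            "1": { "hash": "sha256" }
          }
        }
      }
    }
  }
}
```

The controller merges this file into the configuration it pushes, so the settings survive a re-provision of the gateway.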
A basic docker swarm comes with two options for storage: bind or volume. Both are persistent storage, but only on that node. So if the node fails and the swarm starts the task on a new node, the data is lost. There are a few options to mitigate this with storage plugins for redundant storage. For my small Raspberry Pi docker swarm, I will use replicated storage via GlusterFS and binds. Read More
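As a rough sketch of the GlusterFS side (hostnames, volume name, and paths here are placeholders; glusterfs-server is assumed installed on all three nodes):

```shell
# From one node, join the other peers into the trusted pool
sudo gluster peer probe node2
sudo gluster peer probe node3

# Create and start a 3-way replicated volume from a brick directory on each node
sudo gluster volume create swarm-vol replica 3 \
  node1:/gluster/brick node2:/gluster/brick node3:/gluster/brick
sudo gluster volume start swarm-vol

# Mount the volume on every node; services then bind-mount paths under it
sudo mount -t glusterfs localhost:/swarm-vol /mnt/swarm
```

With the same mount point on every node, a bind like `/mnt/swarm/mysql:/var/lib/mysql` sees the same data wherever the task lands.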
Rclone is a powerful tool for syncing data to and from cloud storage. For everyday usability a graphical interface is nice, and for the use case of encrypted offsite backup, a graphical interface to access single files for restore makes it much more usable. As an example, in this article, I will use the case of encrypted backup to Google Drive. In that case, we encrypt all the data, including filenames, before uploading. That prevents us from browsing the backup on Google Drive to retrieve a specific file that we need to restore. For this purpose we could use the Rclone CLI, but it is much easier with a nice web UI. Read More
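Rclone ships such a web UI with its remote-control daemon. A minimal way to start it (the credentials here are placeholders; pick your own):

```shell
# Launch the rclone remote-control daemon with its built-in web GUI
rclone rcd --rc-web-gui --rc-user admin --rc-pass secret --rc-addr :5572
```

The GUI can then browse configured remotes, including crypt remotes, which decrypt filenames on the fly.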
Having a good set-and-forget, but really set-and-double-check-every-now-and-then, strategy for your backups is important. Backups need to be automated to get done, but also need to be tested to make sure that you can recover files when needed. This article looks at a home or small-company setup doing large-scale backups on a budget. Read More