Tyrel's Blog

Code, Flying, Tech, Automation

Sep 26, 2023

Which which is which?

I had a bit of a "Tyrel, you know nothing" moment today with some command-line tooling.

I have been an avid user of ZSH for a decade now, but recently I tried swapping to fish shell. Over the years, I've maintained a lot of different iterations of dotfiles and shell aliases/functions. I was talking to a friend [citation needed] about updating from exa to eza when I noticed I didn't have my aliases loaded, so I was still using ls directly, even though I have alias ls="exa -lhFgxUm --git --time-style long-iso --group-directories-first" in my .shell_aliases file.

I noticed this by way of the following output:

$ which ls
/usr/bin/ls

Because I expected it to show me the alias that ls pointed to.

My friend pointed out, "which doesn't show aliases, it only points to files," to which I replied something along the lines of, "What? No way, I've used which to show me aliases and functions loads of times."

And I promptly sent a screenshot of my system NOT showing that for other aliases I have set up. Things then got conversational, with me confused to the point of questioning whether I had ever successfully done that. "Maybe my MacBook is set up differently?" So I went and grabbed it.

My friend then looked at the man page for which and noticed the --read-alias and --read-functions flags, which I didn't have set. I then swapped over to bash: "Maybe it's a bash-only thing? I'm using fish."

Nope, still nothing! Then I went to Google, and it turns out that ZSH is what has this set up by default. Thank you, "Althorion" from Stack Overflow, for settling my "Yes, you've done this before" confusion.

It turns out that ZSH's which is equivalent to the shell built-in whence -c, which shows aliases and functions.

After running /usr/bin/zsh and sourcing my aliases (I don't have a zshrc file anymore; I need to set that back up), I was able to settle my fears and prove to myself that I wasn't making things up. There is a which which shows you which aliases you have set up, and it's the default for ZSH.

$ which ls
ls: aliased to exa -lhFgxUm --git --time-style long-iso --group-directories-first
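
If you're on bash with GNU which installed, you can approximate this yourself: that's what the --read-alias flag is for. Its man page suggests a wrapper alias along these lines (a sketch based on that example; check man which locally):

$ alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'

The alias builtin pipes your current aliases to which on stdin, and --tty-only keeps the extra behavior from kicking in inside scripts and pipes.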
 · · ·  linux  macos  zsh

Jan 10, 2023

Dotfiles - My 2022 Way

New Year's Eve eve, my main portable computer crashed. Rebooting into safe mode, I could mount this MacBook's hard drive long enough to SCP the files over the network to my server, but I had to start that over twice because it fell asleep. It seems I don't have access to rsync in "Network Recovery Mode" - maybe I should see if I can install things next time; it's moot now.

I spent all of January 1st's evening learning how Nix works. Of course, I started with Nix on macOS (Intel, at least), so I also had to learn how nix-darwin works. I now have my dotfiles set up to use Nix, rather than an INSTALL.sh file that just sets a bunch of symlinks.

I played around for a little bit with different structures, but what I ended up with by the end of the weekend was two bash scripts, one for each operating system: rebuild-macos.sh and rebuild-ubuntu.sh (I'm still working on a Makefile; env vars are being funky). For now I'm only Nixifying one macOS system and two Ubuntu boxes. I'm avoiding it on my work M1 Mac laptop, as I don't want to deal with managing synthetic.conf and mount points on a work-managed computer. No idea how JAMF and Nix will fight.
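
The rebuild scripts are thin wrappers around the usual flake commands. Here's a minimal sketch of what rebuild-ubuntu.sh boils down to, assuming flakes are enabled, home-manager is installed, and the flake defines a homeConfigurations entry named after the host (my real script has a bit more going on):

#!/usr/bin/env bash
# rebuild-ubuntu.sh - minimal sketch, not the real thing
set -euo pipefail
cd "$(dirname "$0")/hosts/x1carbon-ubuntu"
# Build and activate the home-manager generation for this host.
home-manager switch --flake '.#x1carbon-ubuntu'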

My file tree currently looks like this (I trimmed out a host and a bunch of files in home/):

.
├── home
│   ├── bin/
│   ├── config/
│   ├── gitconfig
│   ├── gitignore
│   ├── gpg/
│   ├── hushlogin
│   └── ssh/
├── hosts/
│   ├── _common/
│   │   ├── fonts.nix
│   │   ├── home.nix
│   │   ├── programs.nix
│   │   └── xdg.nix
│   ├── ts-tl-mbp/
│   │   ├── brew.nix
│   │   ├── default.nix
│   │   ├── flake.lock
│   │   ├── flake.nix
│   │   ├── home-manager.nix
│   │   └── home.nix
│   └── x1carbon-ubuntu/
│       ├── default.nix
│       ├── flake.lock
│       ├── flake.nix
│       ├── home-manager.nix
│       └── home.nix
├── rebuild-macos.sh
└── rebuild-ubuntu.sh

Under hosts/, as you can see, I have a brew.nix file in my MacBook Pro's folder. This is how I install anything from Homebrew. In the flake.nix for my macOS host I am using home-manager, nix-darwin, and nixpkgs. I provide this brew.nix to my darwinConfigurations, and it will install anything I put in my brew nixfile.
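
Applying it is just nix-darwin's standard flake invocation, something like the following (assuming the darwinConfigurations attribute is named after the host, ts-tl-mbp in my tree):

$ cd hosts/ts-tl-mbp
$ darwin-rebuild switch --flake '.#ts-tl-mbp'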

I also have a _common directory under hosts/; this holds things to be installed on EVERY machine - tools such as bat, wget, fzf, fish, etc., along with common symlinks and xdg-config links. My nvim and fish configs are installed and managed this way. Rather than needing to maintain a Neovim config for every different system, in the Nix way I can just manage it all in _common/programs.nix.
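
The visible effect on a managed machine is that these configs end up as symlinks into the Nix store rather than plain files in $HOME. Something like this (path illustrative, hash stands in for a real store hash):

$ ls -l ~/.config/fish/config.fish
... config.fish -> /nix/store/<hash>-home-manager-files/.config/fish/config.fish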

This is not "The Standard Way" to organize things, if you want more inspiration, I took a lot from my friend Andrey's Nixfiles. I was also chatting with him a bunch during this, so I was able to get three systems up and configured in a few days. After the first ubuntu box was configured, it was super easy to manage my others.

My home/ directory is where I store my config files: my SSH public keys, my GPG public keys, my ~/.<dotfiles>, and my ~/.config/<files>. This doesn't really need any explanation, but as an added benefit, I also decided to Lua-ify my nvim configs the same weekend. That's a story for another time, though.

I am, at this time, choosing not to run NixOS, and am relying on Ubuntu for managing my OS. I peeked into Andrey's files, and I really don't want to have to manage a full system configuration, drivers, etc. with Nix. Maybe in the future, when my Lenovo X1 Carbon dies and I need to reinstall, though.

 · · ·  dotfiles  macos  linux  nix  ubuntu

Jun 02, 2022

2016: Monitoring a CO2 tank in a lab with a Raspberry Pi

This was written in 2017, but I recently found a copy again and wanted to post it.

The Story

For a few months last year, I lived around the block from work. I would sometimes stop in on the weekends and pick up stuff I had forgotten at my desk. One day the power went out at my apartment, and I figured I would check in at work and see if there were any problems. I messaged our Lab Safety Manager on Slack to say, "hey, the power went out, and I am at the office. Is there anything you'd like me to check?" He said he hadn't even gotten the alarm emails/pages yet, and asked if I would check the lab and send him a picture of the CO2 tanks, to make sure nothing about the power outage had compromised them. Once I had procured access to the BL2 lab on my building badge, I made my way out back and took a lovely picture of the tanks; everything was fine.

The following week, in my one-on-one meeting with my manager, I mentioned what happened and we discussed the event. It clearly wasn't sustainable to send someone in every time there was a power outage if we didn't need to, but the lab equipment doesn't have any monitoring ports.

Operation Lab Cam was born. I decided to put together a prototype, a Raspberry Pi 3 with a camera module, and play around with finding a way to monitor the display on the tanks. After a few months of not touching the project, I dug into it again on a downtime day. The result is that we now have an automated camera box that takes a picture once a minute and displays it on an auto-refreshing web page. There are many professional products out there that do exactly this, but I wanted something that has the ability to be upgraded in the future.

Summary of the Technical Details

Currently the entire process is managed by one bash script, which is a little clunky, but it's livable. The script's implementation goes a little like this:

  1. Take a picture to a temporary location.
  2. Add a graphical time stamp.
  3. Copy that image to both the currently served image, and a timestamped filename backup.

The page that serves the image is just a simple web page that shows the image and refreshes every thirty seconds.

The Gritty Technical Details

The program I'm using to take pictures is raspistill. If my script called raspistill fresh every time it wanted a picture, saving images could take a lot longer, because raspistill has to meter the image each time, and that adds up. The solution is signal mode, which turns raspistill into a daemon: with it enabled, the backgrounded process takes a picture any time you send it a SIGUSR1.

Instead of setting up a service with systemd, I have a small bash script. At the beginning, I run ps aux and check whether raspistill is running; if it's not, I start a new instance with the appropriate options and background it. The next time the script runs, it will detect that raspistill is already running and be almost a no-op.

After this, I send a SIGUSR1 (kill -10) to take the picture, which is then saved, un-timestamped. Next up, I call ImageMagick's convert on the image and crop out the center, because all I care about is a 500x700 pixel region (which is why I couldn't use raspistill's "-a 12" option).

This is then copied to the image that is served by the web page, and also backed up to a directory that nginx serves.
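
Putting those pieces together, the script looks something like this sketch (paths, sizes, and options are illustrative rather than the originals; I've used pgrep where the real script greps ps aux, and the whole thing runs from cron once a minute):

#!/usr/bin/env bash
# Sketch of the capture script, run once a minute from cron.
set -euo pipefail

RAW=/tmp/co2-raw.jpg
SERVED=/var/www/co2/current.jpg
BACKUPS=/var/www/co2/history

# Start raspistill as a signal-mode daemon if it isn't running yet.
if ! pgrep -x raspistill > /dev/null; then
    raspistill --signal --timeout 0 --output "$RAW" &
    sleep 2  # give the camera a moment to initialize
fi

# SIGUSR1 (kill -10) tells the backgrounded raspistill to take a shot.
pkill -USR1 -x raspistill
sleep 2  # let it finish writing the file

# Crop the 500x700 region of interest and stamp the time on it.
convert "$RAW" -gravity center -crop 500x700+0+0 +repage \
    -gravity southeast -fill white -pointsize 24 \
    -annotate +10+10 "$(date '+%Y-%m-%d %H:%M')" /tmp/co2-stamped.jpg

# Publish: the currently served image, plus a timestamped backup.
cp /tmp/co2-stamped.jpg "$SERVED"
cp /tmp/co2-stamped.jpg "$BACKUPS/co2-$(date '+%Y%m%d-%H%M').jpg"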

LEDs on a CO2 tank
 · · ·  linux  raspberrypi

Jan 28, 2015

Too many open files

When I worked at Propel Marketing, we used to outsource static websites to a third-party vendor and then host them on our server. It was our job as developers to pull down the finished website zip file from the vendor, check it to make sure they had used the proper domain name (a lot of the time they hadn't), and make sure it actually looked nice. If these few criteria were met, we could launch the site.

Part of this process was SCPing the directory to our sites server, which was where we had Apache running with every custom static site as a vhost. We would put the website in /var/www/vhosts/domain.name.here/ and then create the proper files in sites-available and sites-enabled (more on this in another entry). After that, the next step was to run a checkconfig and restart Apache.
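
For context, launching a site looked roughly like this on a Debian-style Apache box (paths illustrative; we symlinked by hand rather than using a2ensite):

$ scp -r domain.name.here/ sites-server:/var/www/vhosts/domain.name.here/
$ sudo ln -s /etc/apache2/sites-available/domain.name.here \
             /etc/apache2/sites-enabled/domain.name.here
$ sudo apache2ctl configtest    # the "checkconfig" step
$ sudo service apache2 restart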

Here's where it all went wrong one day. If I recall correctly, my boss was on vacation, so he had me doing a bit of extra work and launching a few more sites than I usually do. Not only that, but we also had a deadline at the end of the month, which was either the next day or the day after. I figured I'd just set up all my sites for the two days and then have some extra time the next day for other things to work on. So I started launching my sites. After each one, I would add the domain it was supposed to be at into my /etc/hosts file and make sure it worked.

I was probably halfway done with my sites when suddenly I ran into one that didn't work. I checked another one to see if maybe it was just my network being stupid and not liking my hosts file, but no, that wasn't the problem. Suddenly, EVERY SITE stopped working on this server. Panicking, I deleted the symlink in sites-enabled and restarted Apache. Everything worked again. I then put that site aside; maybe something in the PHP files broke the server, who knows, but I had other sites I could launch.

I set up the next site, and the same thing happened again: no sites worked. Okay, now it's time to freak out and call our sysadmin. He didn't answer his phone, so I called my boss JB. I told him the problem, and he said he would reach out to the sysadmin and see what the problem was, all the while with me telling JB "It's not broken broken, it just doesn't work, it's not my fault," etc., etc. A couple hours later, our sysadmin emailed us back and said he was able to fix the problem.

It turns out there's a hard limit to the number of files a user's processes can have open, and it was set to 1000 for the www-data user. The site I launched was coincidentally the 500th site on that server, and each site has an access.log and an error.log, two files Apache keeps open constantly to log to. He raised www-data's ulimit to something a lot higher (I don't recall now what it was), which gave the sites server a lot more leeway in how many sites it could host.
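
If you want to check for this condition yourself, something along these lines works (illustrative, not our sysadmin's exact commands):

$ sudo lsof -u www-data | wc -l          # how many files www-data has open
$ sudo -u www-data sh -c 'ulimit -n'     # the current open-file limit
# Raise it persistently in /etc/security/limits.conf:
# www-data  soft  nofile  65536
# www-data  hard  nofile  65536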

 · · ·  python  linux  ulimit  bugs