Sep 26, 2023
I had a bit of a "Tyrel you know nothing" moment today with some command-line tooling.
I have been an avid user of ZSH for a decade now, but recently I tried to swap to fish shell.
Over the years, I've maintained a lot of different iterations of dotfiles and shell aliases/functions.
I was talking to a friend [citation needed] about updating from exa to eza when I noticed I didn't have my aliases loaded, so I was still using ls directly. I have alias ls="exa -lhFgxUm --git --time-style long-iso --group-directories-first" in my .shell_aliases file.
I demonstrated this by running which ls, because I expected it to show me which alias ls was pointed to.
My friend pointed out that "Which doesn't show aliases, it only points to files" to which I replied along the lines of "What? No way, I've used which to show me aliases and functions loads of times."
I promptly sent back a screenshot of my system NOT showing that output for other aliases I have set up. Things got conversational and confusing from there, to the point of me wondering, "Had I ever successfully done that? Maybe my MacBook is set up differently." So I went and grabbed it.
My friend then looked at the man page for which and noticed it has --read-alias and --read-functions flags, which I didn't have set.
I then swapped over to bash: "Maybe it's a bash-only thing? I'm using fish."
Nope, still nothing! Then I went to Google, and it turns out that ZSH is what has this set up by default.
Thank you "Althorion" from Stack Overflow for settling my "Yes, you've done this before" confusion.
It turns out that ZSH's which is equivalent to the ZSH shell built-in whence -c, which shows aliases and functions.
After running /usr/bin/zsh and sourcing my aliases (I don't have a zshrc file anymore, I need to set that back up), I was able to settle my fears and prove to myself that I wasn't making things up. There is a which which shows you which aliases you have set up, which is default for ZSH.
$ which ls
ls: aliased to exa -lhFgxUm --git --time-style long-iso --group-directories-first
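For comparison, here's roughly how the two lookups differ; the /usr/bin/ls path is illustrative and will vary by system:

$ whence -c ls
ls: aliased to exa -lhFgxUm --git --time-style long-iso --group-directories-first
$ /usr/bin/which ls
/usr/bin/ls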
Jan 10, 2023
New Year's Eve eve, my main portable computer crashed. Rebooting into Safe Mode, I could mount this MacBook's hard drive long enough to SCP the files over the network to my server, but I had to restart that twice because the machine fell asleep. I don't seem to have access to rsync in "Network Recovery Mode"; maybe I should see whether I can install things next time, but it's moot now.
I spent all of January 1st's evening learning how Nix works. Of course, I started with Nix on macOS (Intel, at least), so I also had to learn how nix-darwin works. I now have my dotfiles set up to use Nix, rather than an INSTALL.sh file that just sets a bunch of symlinks.
I played around for a little bit with different structures, but what I ended up with by the end of the weekend was two bash scripts (still working on a Makefile; env vars are being funky), one for each operating system: rebuild-macos.sh and rebuild-ubuntu.sh. For now I'm only Nixifying one macOS system and two Ubuntu boxes. I'm avoiding it on my work M1 Mac laptop, as I don't want to deal with managing synthetic.conf and mount points on a work-managed computer. No idea how JAMF and Nix would fight.
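The rebuild scripts are thin wrappers around the flake in each host's folder. A hypothetical sketch of the shape of rebuild-macos.sh (my real one differs, since it's still fighting env vars):

#!/usr/bin/env bash
# Hypothetical sketch of rebuild-macos.sh -- not the real script.
set -euo pipefail

HOST="ts-tl-mbp"  # the flake lives in hosts/ts-tl-mbp/ (see the tree below)

cd "$(dirname "$0")/hosts/${HOST}"

# Build and activate the darwinConfiguration named after this host.
darwin-rebuild switch --flake ".#${HOST}"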
My file tree currently looks like this (I've trimmed out a host and a bunch of files in home/):
.
├── home
│   ├── bin/
│   ├── config/
│   ├── gitconfig
│   ├── gitignore
│   ├── gpg/
│   ├── hushlogin
│   └── ssh/
├── hosts/
│   ├── _common/
│   │   ├── fonts.nix
│   │   ├── home.nix
│   │   ├── programs.nix
│   │   └── xdg.nix
│   ├── ts-tl-mbp/
│   │   ├── brew.nix
│   │   ├── default.nix
│   │   ├── flake.lock
│   │   ├── flake.nix
│   │   ├── home-manager.nix
│   │   └── home.nix
│   └── x1carbon-ubuntu/
│       ├── default.nix
│       ├── flake.lock
│       ├── flake.nix
│       ├── home-manager.nix
│       └── home.nix
├── rebuild-macos.sh
└── rebuild-ubuntu.sh
Under hosts/, as you can see, I have a brew.nix file in my MacBook Pro's folder. This is how I install anything from Homebrew. In the flake.nix for my macOS host I am using home-manager, nix-darwin, and nixpkgs. I provide this brew.nix to my darwinConfigurations, and it will install anything I put in that brew nixfile.
I also have a _common directory in hosts/; this holds things that are to be installed on EVERY machine, such as bat, wget, fzf, fish, etc., along with common symlinks and xdg-config links. My nvim and fish configs are installed and managed this way. Rather than needing to maintain a Neovim config for every different system, in the Nix way I can just manage it all in _common/programs.nix.
This is not "The Standard Way" to organize things; if you want more inspiration, I took a lot from my friend Andrey's Nixfiles. I was also chatting with him a bunch during this, which is how I got three systems up and configured in a few days. After the first Ubuntu box was configured, it was super easy to manage the others.
My home/ directory is where I store my config files: my SSH public keys, my GPG public keys, my ~/.<dotfiles>, and my ~/.config/<files>. This doesn't really need any explanation, but as an added benefit, I also decided to Lua-ify my nvim configs the same weekend. That's a story for another time.
I am, at this time, choosing not to run NixOS, relying on Ubuntu for managing my OS instead. I peeked into Andrey's files, and I really don't want to manage a full system configuration, drivers, etc. with Nix. Maybe in the future, when my Lenovo X1 Carbon dies and I need to reinstall it, though.
Jun 02, 2022
This was written in 2017, but I found a copy again and wanted to post it.
The Story
For a few months last year, I lived around the block from work. I would sometimes stop in on the weekends and pick up stuff I had forgotten at my desk. One day the power went out at my apartment, so I figured I would check in at work and see if there were any problems. I messaged our Lab Safety Manager on Slack to say, "Hey, the power went out and I am at the office. Is there anything you'd like me to check?" He said he hadn't even gotten the alarm emails/pages yet, and asked if I would check out the lab and send him a picture of the CO2 tanks, to make sure the power outage hadn't compromised them. Once I had procured access to the BL2 lab on my building badge, I made my way out back and took a lovely picture of the tanks; everything was fine.
The following week, in my one-on-one meeting with my manager, I mentioned what happened and she and I discussed the event. It clearly isn't sustainable to send someone in every time there's a power outage if we don't need to, but the lab equipment doesn't have any monitoring ports.
Operation Lab Cam was born. I decided to put together a prototype out of a Raspberry Pi 3 with a camera module and play around with finding a way to monitor the display on the tanks. After a few months of not touching the project, I dug into it again on a downtime day. The result is that we now have an automated camera box that takes a picture once a minute and displays it on an auto-refreshing web page. There are many professional products out there that do exactly this, but I wanted something that could be upgraded in the future.
Summary of the Technical Details
Currently the entire process is managed by one bash script, which is a little clunky, but it's livable. The script goes a little like this (a fuller sketch follows in the gritty details below):
- Take a picture to a temporary location.
- Add a graphical time stamp.
- Copy that image to both the currently served image, and a timestamped filename backup.
The page that serves the image is just a simple web page that shows the image and refreshes once every thirty seconds.
The Gritty Technical Details
The program I'm using to take pictures is raspistill. If I had written my script to call raspistill every time I wanted a picture taken, it could have taken a lot longer to save each image, because raspistill has to meter the scene every time it starts, which adds up. The solution is signal mode, which turns raspistill into a daemon: with signal mode enabled, the backgrounded process takes a picture any time you send it a SIGUSR1.
Instead of setting up a service with systemd, I have a small bash script. At the beginning, it runs a ps aux and checks whether raspistill is running; if it's not, it starts up a new instance of raspistill with the appropriate options and backgrounds it. The next time the script runs, it will detect that raspistill is already running, and that check becomes almost a no-op.
After this, I send a SIGUSR1 (kill -10) to take the picture, which is then saved, un-timestamped. Next I call ImageMagick's convert on the image and crop out the center, because all I care about is a 500x700 pixel region (which is also why I couldn't use raspistill's "-a 12" annotation option).
This is then copied to the image that is served by the web page, and also backed up in a directory that nginx serves.
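Putting those pieces together, here's a minimal sketch of that kind of script; the paths, long-form flags, and the sleep are my assumptions, not the original:

#!/usr/bin/env bash
# Sketch of the once-a-minute capture job (run from cron or a loop).

RAW=/tmp/labcam-raw.jpg
WEBROOT=/var/www/labcam

# Start raspistill as a signal-driven daemon if it isn't already running.
# The [r] trick keeps grep from matching its own process entry.
if ! ps aux | grep '[r]aspistill' > /dev/null; then
    raspistill --signal --timeout 0 --output "$RAW" &
fi

# Ask the backgrounded raspistill for a capture (SIGUSR1 is kill -10),
# then give it a moment to finish writing the file.
kill -USR1 "$(pgrep raspistill)"
sleep 2

# Crop the 500x700 region of interest from the center, then stamp the time.
convert "$RAW" -gravity Center -crop 500x700+0+0 +repage \
    -gravity SouthEast -pointsize 24 -fill white \
    -annotate +10+10 "$(date '+%Y-%m-%d %H:%M')" /tmp/labcam-stamped.jpg

# Publish: one copy at the fixed URL, one timestamped backup for nginx.
cp /tmp/labcam-stamped.jpg "$WEBROOT/current.jpg"
cp /tmp/labcam-stamped.jpg "$WEBROOT/labcam-$(date '+%Y%m%d-%H%M').jpg"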
Jan 28, 2015
When I worked at Propel Marketing, we used to outsource static websites to a third-party vendor and then host them on our servers. It was our job as developers to pull down the finished website zip file from the vendor, check it to make sure they had used the proper domain name (a lot of the time they hadn't), and make sure it actually looked nice. If these few criteria were met, we could launch the site.
Part of this process was SCPing the directory to our sites server, where we had Apache running with every custom static site as a vhost. We would put the website in /var/www/vhosts/domain.name.here/ and then create the proper files in sites-available and sites-enabled (more on this in another entry). After that, the next step was to run a checkconfig and restart Apache.
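That launch step looked roughly like the following; the exact commands are my reconstruction, assuming a Debian-style Apache layout:

$ sudo ln -s /etc/apache2/sites-available/domain.name.here /etc/apache2/sites-enabled/
$ sudo apachectl configtest
$ sudo service apache2 restart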
Here's where it all went wrong one day. If I recall correctly, my boss was on vacation, so he had me doing a bit of extra work and launching a few more sites than I usually did. Not only that, but we also had a deadline at the end of the month, which was either the next day or the day after. I figured I'd just set up all of mine for the two days at once and have some extra time the next day for other things to work on. So I started launching my sites. After each one, I would add the domain it was supposed to be at to my /etc/hosts file and make sure it worked.
I was probably halfway done with my sites when I ran into one that didn't work. I checked another one to see if maybe it was just my network being stupid and not liking my hosts file, but no, that wasn't the problem. Suddenly, EVERY SITE stopped working on this server. Panicking, I deleted the symlink in sites-enabled and restarted Apache. Everything worked again. I put that site aside; maybe something in its PHP files was breaking the server, who knows, but I had other sites I could launch.
I set up the next site and the same thing happened again: no sites worked. Okay, now it was time to freak out and call our sysadmin. He didn't answer his phone, so I called my boss JB. I told him the problem and he said he would reach out to the sysadmin and see what was going on, all the while with me telling JB, "It's not broken broken, it just doesn't work, it's not my fault," etc., etc. A couple of hours later, our sysadmin emailed us back and said he had fixed the problem.
It turns out there's a hard limit to the number of files your system can have open per user, and it was set to 1000 for the www-data user. The site I launched was coincidentally the 500th site on that server, and each site has an access.log and an error.log that Apache holds open constantly for logging; 500 sites times two log files hit the 1000-file limit exactly. He raised www-data's ulimit to something a lot higher (I don't recall now what it was), which gave a lot more leeway in how many sites the sites server could host.
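If you ever need to check this yourself, the moving parts look roughly like this; the limits.conf values are illustrative, not what our sysadmin used:

$ sudo -u www-data sh -c 'ulimit -n'   # soft open-file limit for that user
$ sudo lsof -u www-data | wc -l        # how many files it actually has open

A persistent raise goes in /etc/security/limits.conf:

www-data  soft  nofile  4096
www-data  hard  nofile  8192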
Jan 09, 2015
I had a friend complain that he had to keep adding his SSH key to his ssh-agent every time he rebooted. Here's a really easy bit of shell code you can put into your .bashrc or your .zshrc file:
# Path to your ssh key file
SSHKEYFILE=/path/to/your/ssh/key/file

# ssh-add -l lists the agent's loaded keys; the comment is usually the file path.
ssh-add -l | grep -q "$SSHKEYFILE"
RC=$?
if [ $RC -eq 1 ]
then
    # Key not loaded: add it with a 24-hour lifetime (-t takes seconds).
    ssh-add -t 86400 "$SSHKEYFILE"
    echo "Added key to the agent."
fi
Every time your bash or zsh rc file is sourced, this checks whether your key is among the currently added keys, and if it's not, it adds it.
This has the benefit of not requiring you to put in your password every time you connect to a remote machine, if you password-protect your SSH keys (which you should).
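To see what the agent is holding at any point, ssh-add -l lists each key's size, fingerprint, and comment; the values here are placeholders:

$ ssh-add -l
4096 SHA256:<fingerprint> /path/to/your/ssh/key/file (RSA)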
Jun 21, 2014
A lot of times when I stop at someone's computer and help them in the terminal, I use a Readline command and people say "How the heck did you do that?"
Let me first back up and explain what Readline is. From the GNU Readline documentation: "The GNU Readline library provides a set of functions for use by applications that allow users to edit command lines as they are typed in." By default, Readline is set up in Emacs mode. No, you don't have to have an extra four fingers to use Readline; most of the commands are simple.
Here are a couple of the commands I use daily:
Movement
- To move to the beginning of a line, you press C-a
- To move to the end of a line you press C-e
Killing and Yanking
- To cut the rest of the line from where your cursor is, to the end, you press C-k
- To delete one word you press C-w
- To paste either of the two previous back you can press C-y
Miscellaneous
- To clear the screen and get to a fresh start, you can press C-l
- To end your session, you can send a C-d (this sends an end-of-file character)
- To search for a command you typed recently, press C-r and start typing, it will search backwards. C-r again will search for an earlier match.
- The inverse of C-r is C-s; it works the same way but searches forward instead of backwards.
- To open your $EDITOR to edit the current shell command you wish to write, press C-x C-e
Finally, don't forget about C-c. While not specifically Readline, it's very useful because it sends the SIGINT signal to the program; if you're just on the command line, it won't execute the line you have typed and will give you a new line with nothing on it. A nice clean start.
To find out a lot more, read the documentation in the Readline Commands Docs. I even learned some things while writing this up: apparently pressing C-x ~ will list off all the possible usernames. Good to know, and good to always keep learning.
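Relatedly, if you want to see every binding your own bash session currently has, bind can dump the whole Readline map. A trimmed example (output varies by setup):

$ bind -p | grep -v 'not bound'
"\C-a": beginning-of-line
"\C-e": end-of-line
...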
Mar 08, 2012
I realize I haven't updated in a while. I haven't had much free time recently, as I've been working on a project for my father in C# after work hours. This is a great change from working only in Python and JavaScript recently. I'm making a program that will analyze test results from a plasma torch for a company called HyperTherm. My father built the physical machine, but the employees want something that lets them better see whether a torch unit passed or failed. The program has a bar code scanner that scans the tool used in the test and matches it up to the lot of torch parts. Another added feature is the ability to print a white label that says "UNIT PASSED" or a giant red label that says the unit failed, along with which of the 8 tests failed.

I had to learn how to use delegates, as my serial event listener is on a separate thread and I can't update labels or other parts of the user interface without them. Still working on it; hopefully I'll wrap it up by Saint Patrick's Day.
I recently found a cool command in bash that I hadn't previously known. C-o will execute the current line and then bring up the following line from bash history. If you have a set of commands you want to execute again, rather than pressing up 20 times, hitting enter, pressing up 19 times, hitting enter, and so on, you can just press up 20 times once, then press C-o as many times as you need.
For example:
$ touch a
$ touch b
$ touch c
# [up] [up] [up]
$ touch a [C-o]
$ touch b [C-o]
$ touch c [C-o]
As you can see, all I had to do was go back to the $ touch a line and hit control-o three times, and it touched the files again!
Jan 13, 2012
Today we had a problem at work on a system.
Without getting into too much detail, so as not to give away secrets behind the verbal NDA I am under, I will just say it had to do with a GPG public key of mine that had expired on a dev machine and accidentally propagated to a production machine during an install.
This key had a subkey as well, so figuring this out was tricky.
To start, you can list your gpg keys like so:
$ gpg --list-keys
This will list keys such as:
pub 4096R/01A53981 2011-11-09 [expires: 2016-11-07]
uid Tyrel Anthony Souza (Five year key for email.)
sub 4096R/C482F56D 2011-11-09 [expires: 2016-11-07]
To make this key not expire (the same steps change the expiration date to any other time), you must first edit the key:
$ gpg --edit-key 01A53981
You will then see a gpg prompt: gpg>
Type "expire" and you will be prompted for how long the key should now be valid:
Changing expiration time for the primary key.
Please specify how long the key should be valid.
0 = key does not expire
<n> = key expires in n days
<n>w = key expires in n weeks
<n>m = key expires in n months
<n>y = key expires in n years
You are then done setting the expiration on the primary key. If you have a subkey, doing it is as easy as typing key 1 and repeating the expire step.
To finish and wrap things up, type save and you are done.
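Condensed, the whole session looks like this; the key ID is the one from the listing above, and gpg asks for a duration at each expire step:

$ gpg --edit-key 01A53981
gpg> expire
gpg> key 1
gpg> expire
gpg> save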