
Computable Multiverse

Hi, I'm Rakhim. I teach, program, make podcasts, comics and videos on computer science at Codexpanse.com. You can learn more about my work and even support me via Patreon.


Moved from Jekyll to Hugo and ox-hugo

I have changed the setup for this blog from Jekyll + Github to Hugo + ox-hugo + Netlify. The main goal was to be able to write blog posts from within Emacs and remove as much friction as possible. Also, Org mode is much more comfortable to write in compared to any Markdown editor I’ve tried.

Previous setup

I’ve been using Jekyll and Github pages for a long time, and it was generally a good experience. I don’t have big complaints about Jekyll. It can be a bit clunky when it comes to things like tags, but I don’t use them anyway. My Russian blog is still powered by it. One thing that was never fun: the need to manage the Ruby environment and dependencies. Some people prefer to encapsulate everything into Docker containers, and I’ve tried that with Jekyll as well, but the added complexity is not worth it.
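For the curious, the Docker route looks roughly like this (a sketch using the official jekyll/jekyll image; flags and paths may differ for your setup):

```bash
# Serve a Jekyll site without installing Ruby locally,
# using the official jekyll/jekyll image.
docker run --rm -it \
  -v "$PWD":/srv/jekyll \
  -p 4000:4000 \
  jekyll/jekyll \
  jekyll serve --host 0.0.0.0
```

It works, but now you’re maintaining Docker, images and volumes on top of everything else.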

I was using Sublime Text or sometimes iA Writer to write posts. The whole process was full of small steps that added friction. I fully acknowledge that this sounds like the “the tools stopped me from being a prolific blogger, if only I had better tools” fallacy.

This is how it looked, for the most part:

  1. Go to iTerm, navigate to my blog directory and start Jekyll server.
  2. Open the project in Sublime.
  3. Create a new Markdown file with a correct name (e.g. 2018-01-11-be_bored.md). I have a bash script to quickly create a new file with some front-matter inserted by default (a sketch of that script follows this list).
  4. Go to browser, reload page, open post.
  5. Write Markdown in Sublime, reload page to see result.
  6. Push to Github when ready.
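The script was nothing special; a minimal sketch of that kind of helper (the front-matter fields here are illustrative, not my exact ones):

```bash
#!/usr/bin/env bash
# new-post.sh: create a Jekyll post with some default front-matter.
# Usage: ./new-post.sh be_bored
set -euo pipefail

slug="$1"
file="_posts/$(date +%Y-%m-%d)-${slug}.md"

cat > "$file" <<EOF
---
layout: post
title: ""
date: $(date +%Y-%m-%d)
---
EOF

echo "Created ${file}"
```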

Sometimes things go bad and the Github build fails. There is no explanation, and on rare occasions I had to contact support to find out the actual build error output. GitHub’s support is excellent, but this process is no fun.

New setup

Now I use Hugo static site generator, but don’t write Markdown myself. I write in Org mode (I talked about it in EmacsCast episode 2) and use ox-hugo to generate Markdown files for Hugo to then generate static HTML. Yeah, seems like too many moving parts for the sake of the simplest page possible, but it works remarkably well and — worst case scenario — if Emacs or Org or ox-hugo go bad, I can go back to essentially the same process as before.

This is how it looks now:

  1. Go to Emacs, open my blog project (one second’s worth of keystrokes thanks to Projectile and Helm, which were also mentioned in EmacsCast episode 2).
  2. Open shell buffer, start Hugo server, open browser.
  3. Write a new post. All posts are stored in a single Org file, so I don’t need to create new files. The name of the final Markdown file is generated automatically from the post title (see the sketch after this list).
  4. Save the Org file. The new post is generated and the browser is redirected or refreshed.
  5. When ready, change the Org status of the section to DONE.
  6. Use Magit or a single Bash script to add, commit and push files to Github.
  7. Netlify picks up the commit and builds the pages. If something goes wrong, I can see the detailed build logs.
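For reference, the single-file layout ox-hugo works with looks roughly like this (a sketch built from ox-hugo’s documented keywords, not my actual file; here the file name property is written out by hand):

```org
#+HUGO_BASE_DIR: ~/blog
#+HUGO_SECTION: posts

* DONE Moved from Jekyll to Hugo and ox-hugo
:PROPERTIES:
:EXPORT_FILE_NAME: moved-from-jekyll-to-hugo-and-ox-hugo
:END:

The post body, in Org markup. The TODO → DONE state change is what
flips the post from draft to published.
```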

And with Org capture I can create a new draft from anywhere in Emacs with two keystrokes.
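Something along these lines (a sketch, not my exact configuration; the key, path and heading are illustrative):

```elisp
;; An Org capture template for blog drafts: capturing with this
;; template files a new TODO subtree under the "Posts" heading
;; of the blog's single Org file.
(setq org-capture-templates
      '(("b" "Blog draft" entry
         (file+headline "~/blog/posts.org" "Posts")
         "* TODO %?\n")))
```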

Nice things about Hugo

There are several small things that make Hugo nicer than Jekyll for me:

  1. With hugo server -D --navigateToChanged the browser navigates to the changed file automatically and refreshes the page on each change. No need to refresh the page manually! Instant Markdown preview.
  2. Hugo is distributed via Homebrew, and I don’t need to care about the Ruby environment and dependencies like I did with Jekyll.
  3. I have several sites, and Hugo randomizes the port if the default port is in use. A tiny nice detail.
  4. It seems much faster than Jekyll.

Nice things about Org and ox-hugo

While this transition was performed mainly out of workcrastination, I’m pretty happy with the results. Hugo itself wouldn’t be a reason to switch; it’s the combination of Org + ox-hugo + Hugo that makes it all worth the hassle.

Writing in Org is arguably a more pleasant experience compared to Markdown. Being able to integrate blogging into the same program that is used for planning, programming and long-form writing is very nice.

The whole blog setup, including this custom theme, is available on Github.

September 3, 2018 | permalink

Agile Food Truck

August 24, 2018 | permalink

Reactive solution

August 24, 2018 | permalink

Spaceship Problems

August 19, 2018 | permalink

Podcasting is not walled (yet)

I’ve launched a podcast recently (EmacsCast) and received lots of feedback. One of the perplexing things people said was: “that’s great, but how do I subscribe? It’s not on iTunes/Google Podcasts.”

I had a similar experience 10+ years ago when I started podcasting, but today it’s much worse. It worries me a lot.

Podcasts are simply RSS feeds with links to media files (usually mp3s). A podcast is basically a URL. And podcast clients are special browsers. They check that URL regularly and download new episodes if the content of the URL changes (new link added). That’s it, no magic, no special membership or anything else required. The technology is pretty “stupid” in a good way.
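A complete podcast feed can be as small as this (a made-up sketch; every name and URL here is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Podcast</title>
    <link>https://example.com/</link>
    <description>A podcast about things.</description>
    <item>
      <title>Episode 1</title>
      <!-- The enclosure is the "link to a media file" part. -->
      <enclosure url="https://example.com/ep1.mp3"
                 length="23456789" type="audio/mpeg"/>
      <pubDate>Tue, 07 Aug 2018 10:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>
```

Paste the URL of a file like this into any podcast client, and you’re subscribed.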

Video podcasts were a thing! It was basically a distributed, un-censorable YouTube where both viewers and creators had full control over their content.

Ever since tech companies started waging war on RSS, podcast distribution has become outwardly RSS-free. What do you do to subscribe? Easy, just search in the app! For the majority of iOS users that app is Apple Podcasts, and recently Google made its own “default client” for Android: Google Podcasts.

It looks like podcast clients are similar to web browsers in that they just provide a way to consume content, but the underlying listings make them very different. The corresponding services are actually isolated catalogs. When you perform a search on Apple Podcasts, you aren’t searching for podcasts. You are searching for Apple-approved podcasts. And if the thing you’re looking for is not there, then… well, you get nothing.

Imagine web browsers worked that way. You want to visit my blog, but you don’t know what URLs are, you’ve never seen or heard of things like https://rakhim.org. There’s no address bar in your browser, just a “search catalog” field.

I tell you my name and you type it into Safari. If my blog was already added to Apple Websites Catalogue, then great, you can visit my site.

But if I haven’t added my blog to their catalog, or, even worse, I’ve tried, but it wasn’t approved for some reason, then I’ll be just sitting here, shouting “dude, just go to https://rakhim.org, it’s there!”, but you’d have no idea what to do with that information. For you, my site doesn’t exist.

(Comic: future_of_the_web.png)

Your browser is capable of reaching my blog, but the feature is hidden or about to disappear. I can teach you to find some obscure menu item, but I can’t do it for all potential visitors.

Most podcast clients still accept a plain RSS URL: Apple Podcasts, iTunes, Pocket Casts, Overcast, Podcast Addict.

(Screenshots: itunespodcast.png, iospodcasts.jpeg)

Google Play Music doesn’t say anything explicitly, but you can just put an RSS URL into the search field and it works. For now. I won’t be surprised if these apps gradually and silently remove this feature.

Lately, there have been lots of discussions about Apple, Facebook, YouTube, and Spotify banning Alex Jones’s Infowars podcast. And most of the debate comes down to either “they have a right” or “they shouldn’t censor”.

The thing is, this shouldn’t matter. If some company decides not to include a URL in their catalog, it shouldn’t really matter. A URL is a URL; the content is there. Both Apple and Google are pretty much hiding the feature that makes podcasting as free and un-censorable as websites.

And it works! For most people, RSS for podcasts doesn’t exist, and a corporation is in charge of what they can listen to.

Last time I was talking about programmers’ responsibilities when it comes to software reliability. Today I want to add: we, developers and software engineers, also have the moral responsibility to educate the public about the way the internet was supposed to be. Open, un-censorable, with underlying protocols put in place to ensure it’s a forest, not a walled garden.

August 7, 2018 | permalink

We shouldn't let people get used to the idea that software fails

Society has certain expectations when it comes to engineering and technology. We expect buildings and bridges to basically never fail. We expect cars to be extremely safe and fool-proof, as much as explosion-powered metal boxes moving at a hundred miles per hour can be safe and fool-proof. Home appliances are supposed to work for years without any serious maintenance. Electricity is just there, always. And when it goes out, we are incredibly frustrated and surprised.

These high expectations of engineering run deep in the relevant industries and educational institutions. Engineers take oaths!


The Obligation of The Engineer

I am an Engineer.

In my profession I take deep pride. To it I owe solemn obligations.

As an engineer, I, (full name), pledge to practice Integrity and Fair Dealing, Tolerance, and Respect, and to uphold devotion to the standards and dignity of my profession, conscious always that my skill carries with it the obligation to serve humanity by making best use of the Earth’s precious wealth.

As an engineer, I shall participate in none but honest enterprises. When needed, my skill and knowledge shall be given without reservation for the public good. In the performance of duty, and in fidelity to my profession, I shall give the utmost.


It might seem like I’m about to start a rant about buggy OS updates and how software engineering must be held as accountable as industrial engineering. I’m not. Just to be clear: yes, I think the state of modern software development is ridiculous. And yes, it seems like it’ll take quite some time and quite a few technological catastrophes until we take software more seriously.

What worries me more than the current state of things is how it affects the mindset of society. The high expectations of engineering are mirrored in reverse for software: we have learned to expect software to fail.

Just a few decades ago it was different. Software developers were the only ones who sometimes expected software to fail. The general public had high hopes, and journalists helped build them.

The rise of information technology was akin to the rise of mechanization in the early 20th century. We could see the analogy clearly: in the 20s and 30s, everyone in the developed world started having and using cars, washing machines, air conditioners and other wonderful devices. They were new, yet incredibly reliable. Every year they became better and cheaper.

In the 80s and 90s, everyone in the developed world started having and using computers and mobile phones. And at first, the process looked very similar. The first personal computers were clunky and weird, but pretty reliable. I’d say incredibly stable, and relatively fast compared to modern computers.

Of course, they were a million times less complex and had fewer features. But hey, here is something to consider: I’m typing this in a macOS app on a very powerful 5K iMac. It’s just text, and I’m going to publish it on the web. The whole experience is mediocre at best. The software is not blazing fast, even though it’s a native app for text editing (it has a very high App Store rating and is one of the apps selected by the App Store editors). The typing experience is laggy on a Bluetooth keyboard. To publish this, I’m going to interact with a browser, which at this point is a whole other operating system with another layer of delays. The only thing that really feels “20 years better” is the connection speed itself.

Almost slipped into a rant there.

Personal computers felt like the new “cars for everyone”. We all get computers! They will get better, faster and cheaper every year! They are becoming the backbone of the world, as much as motor vehicles did.

It all started to change with the Web. The Web was the first global technological phenomenon that was built and maintained by amateurs. Computer hardware, software, and the internet itself were built by mathematicians and engineers. The Web was built by people like me.

Mathematicians and engineers are tightly connected to academia, and they operate with ideas like “proof”, “peer review” and “oath”, while many software developers operate with ideas like “ship early, ship often”, “move fast and break things” and “if you are not embarrassed by the first version of your product, you’ve launched too late”.

This isn’t bad in the long run. I think society will go through The Age of the Amateur and get better in the end.

Amateurs have been making things all along. Until recent years, we just couldn’t ship our amateur inventions to millions of people that easily. When amateurs invented something potentially useful, it took quite a few iterations and stages to get it to the public. By that time it had become more refined, tested and stable, thanks to regulations, selection and simply time.

Today I can make a bad piece of software that does something interesting, and potentially millions of people will get frustrated or harmed.

If you asked a member of the general population in the 90s or early 2000s about computers and the internet, I bet most of them would sound optimistic. That’s what everyone is talking about, right? Soon, we will do everything with computers! Computers are super smart!

While this is anecdotal evidence, today most of the people I know are frustrated with technology. Apps are buggy, the web is filled with ads and intrusive useless notifications (would you like some cookies?), touch screens everywhere suddenly made simple things like washing machines and car control panels barely usable.

We just got used to that. Electronics is something that’s wonky and buggy. That’s what we expect.

And this is scary.

When a society expects certain industries or institutions to fail or do harmful things due to lack of quality control or unprofessionalism, we think that society is in bad shape. When we expect politicians to fail, we revolt. When we expect medicine to fail, we protest. When we expect infrastructure to fail, we at least write angry letters to our representatives and try to make some difference through political means.

We can’t do much when software fails. Unlike science, there’s no public accountability. Even in those shrinking areas where there’s still competition, there isn’t much choice: all software fails. I, for one, can only name a couple of software products that are rock solid. But they don’t exist on their own; they operate on top of multiple other layers (hardware, OS, frameworks, web, etc.). And it’s unlikely that all the layers are as good.

I’m in no position to propose a better way for the world to evolve. But I believe that computer scientists and software engineers have the moral responsibility to educate the public about the way software should be. We shouldn’t let people get used to the idea that software inevitably fails.

(Discussion on Lobste.rs)

July 11, 2018 | permalink

Products aren't for people yet

Remember how your parents would try to use Windows 95 or something like Norton Commander? They’d copy an app shortcut to a floppy disk and be amazed at how much stuff they were able to put on it. All the games, and lots of space left! And you’d think they didn’t understand anything at all, they were just clicking pretty much randomly, hoping this magic machine would at some point understand them and do the right thing. That was the time when programmers were building products for programmers.

It wasn’t awesome, but it was sincere. Nobody was pretending that software was built for regular people, and the consensus was: in order to use a computer you have to learn something. Or know someone who can help.

Today we’re living in times when programmers believe that they’re building products for people.

This isn’t awesome and it’s not sincere. Today all of us regularly feel like our parents with Windows 95. You have to learn or, more commonly, just remember how to use a certain website or app. And I’m not talking about small pieces of software; I’m mostly talking about huge, global products: Google’s user interfaces are chaos and madness, Facebook is madness and mess, Android is mess and vertigo. Trying to do anything non-trivial, anything beyond the most basic, reminds me of good old pixel-hunting quest games: what if I press here? Is it a button? Can I drag this? Oh, this is text, but it’s clickable, and this is a button, but it looks like text…

Using these interfaces on a daily basis feels like a dream to me. You know, like in a dream where you’re trying to run, but it’s futile. And everything changes chaotically, and nothing makes sense. And it’s so nice when it rains, but the sausages are going to burn. You know. Meanwhile, Google and Facebook are running A/B tests: “hmm, if we make this button less button-y and move it to the left, more people will click it”. So we get an update to a bad UX.

It’s hard to build painless products in general. But it’s especially hard for programmers to build painless products because in order to become a programmer we all had to endure suffering consistently. This is the unfortunate reality: you have to eat lots of shit to become a software developer. So we have a higher tolerance to UX pain, and this affects the way we design products and interfaces.

I’m talking about our tools, of course: languages, libraries, frameworks. You’d like to get into the creative process and explore interesting problems and abstractions, but first, you have to deal with versions, dependencies, compatibility, bugs, updates, and other accidental complexity. This is a vicious circle because programming tools are created by programmers.

Of course, we aren’t doing this because we love pain. Lots of progress is made all the time. Unfortunately, users often pay for this progress. Architectural decisions based on the programmer’s convenience affect the end-user experience, design, and UI. My favorite example is Atom. A text editor. It still works with visible lag on an insanely powerful multi-core, multi-gigabyte machine. This text editor performs worse than one did 30 years ago. Because Electron is a nice development tool.

Electron is not bad, the situation is bad. The situation where Electron is a good choice is pretty bad.

We can say bad things about programmers, rant about how we can’t write code anymore, how frontend web developers are insane and npm is considered harmful. But it seems like on the macro level this is just a transition period. Some day we’ll get to the point where programmers build products for people, and, hopefully, we’ll think of it as “people are building products for people”.

February 4, 2017 | permalink

Justify when reducing user's freedom

Links should not forcefully open in a new tab, because by enabling this the designer takes away some of the freedom from the user. Without target="_blank" the user has a choice: open here or in a new tab. With target="_blank" the user has no choice.
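In markup, the difference is a single attribute:

```html
<!-- The reader decides: open here, or open in a new tab themselves. -->
<a href="https://rakhim.org">my blog</a>

<!-- The designer has decided for them. -->
<a href="https://rakhim.org" target="_blank">my blog</a>
```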

I will not add background music in my podcasts because this would take away some of the freedom from the listener. They can add background music to the speech, but they can’t remove the music if it’s present in the source.

Of course, some links require the new tab and some podcasts require music. This is not a rule about web interfaces or podcast production. It also doesn’t mean “allow as many options as possible”. This is just a reminder: if your idea reduces the user’s freedom, you have to justify it.

January 19, 2017 | permalink

What is binary?

My attempt to explain binary numbers.

December 2, 2016 | permalink

Backups

I’m not going to try to convince you to backup your data. I didn’t do backups for most of my life, except for some photos and videos here and there. And those weren’t really backups, more like archives on external HDDs.

I’ve been using computers every day since, gosh, I guess 5th grade, and never ever did a hard drive fail on me. This is remarkable. Screens, keyboards, mice, even fans have died over the years, but not hard drives. Good old magnetic, noisy, spinning monsters.

Backup evangelists love to say that “your HDD will fail, it’s inevitable”. Well, yes, it will inevitably fail if you use it for years, but the truth is, you’ll more likely switch computers before your drive fails.

But the truth is, it’s extremely unlikely that you’ll ever need a backup. You’re probably good. I prefer to think about this stuff as insurance. This is how you buy peace of mind. There are so few aspects of life where you can actually do that: pay some money (not much, either, which is great) and get some peace of mind. Medical insurance, for example, is sort of like that, but not really as good. With data backup, I can be pretty sure I will get everything back as it used to be, effectively traveling back in time. Recovered data is precisely the same as lost data, so it’s not really lost anymore, while my body after medical treatment is not the same anymore.

I’m not gonna go all “3-2-1 backup” on you. That’s the idea that in the perfect world you need at least 3 total copies of your data, 2 of which are local but on different devices, and at least 1 copy offsite. So, if all of your files are on your computer, then you need two external hard drives and a remote hard drive (maybe one at work, in another house or in the cloud). I don’t do that yet. For now my backup strategy looks like this:

  1. Offsite backups with Arq (Dropbox and Amazon Cloud Drive)
  2. Local backups with Time Machine and Carbon Copy Cloner

Most of the things I work on daily are on Github (personal and work), Dropbox (personal) and Google Drive (work).

Let me first explain why I said no to Backblaze and Crashplan. Long story short: I don’t trust them.

Backblaze

Backblaze is a beautiful, sleek guy who says “don’t worry about it, bro”. The Mac client is minimal and cool, and it “just works”.

There are a few issues with Backblaze:

  • It’s not really a backup solution. If you delete a file from your computer, then in one month it will also be deleted from Backblaze’s servers. It syncs stuff just like Dropbox does. This is why Dropbox in and of itself is not a proper backup solution.
  • If you backup an external drive and disconnect it from your computer, then Backblaze will delete that backup from their servers.
  • Backblaze doesn’t backup 100% of files. You can remove some exceptions manually, but some of them are built in. So, I can’t really have a complete copy of my boot disk, for example.
  • If you need to restore files from Backblaze, you’re gonna have a bad time. Your options are: download a Zip archive (if your file is 10 levels deep, then you’ll get all the upper-level folder structure in the archive) or get a flash drive or an HDD via mail. You can’t restore files in place.
  • Some users say it’s very fast, some say it’s very slow. I can’t figure out the reasons, but for me it was dead slow. It took almost 120 hours (5 days) to upload less than 200G of data.
  • The Android app seems to be made by a very intelligent puppy.

Crashplan

Crashplan is a douchey-looking guy in an expensive suit who says “the synergy is just overwhelming in this merger”. It took me a while to understand how it really works. It’s called CrashPlan Online Backup and it can backup, among other places, to your external hard drive. You know, because online.

But once you get it, it’s pretty great in theory. With Crashplan you can backup to any local hard drive and offsite machine (like friend’s computer or any other machine in your network) for free, and with additional fees you can also backup to Crashplan’s cloud.

  • If you use their cloud, then it’s really a backup solution. All your files are stored in the cloud forever*. Unlike Backblaze.
  • You can restore any file to its original location or any other location. No need to download Zip-files from a cumbersome web interface.

(* not forever)

The Crashplan Mac app is… well… ugly as hell.

Look at this Java shit.

I was happy with Crashplan for the first few days, and the possibility of adding more backup destinations if I decide to go all “3-2-1” was reassuring. But it turned out I can’t trust it.

Crashplan, like any other software of that type, is supposed to work in the background, doing its thing while I do mine. I was restructuring files in my photo archives, moving files from folder to folder, renaming stuff. Nothing extreme. But a few days later, when Crashplan said it had backed everything up, I tried to restore some files to check how it works. And it had just lost the whole photos folder, tens of gigabytes of photos. That was the folder I was fiddling with.

I understand this is unfair. Trying to make a reliable copy of the file system while it changes is hard. But:

  1. Should it fail, it must fail gracefully. Losing all the files is unacceptable.
  2. The end user should never worry about this stuff. I wasn’t doing some crazy hacking, I just moved files around.

Oh, and Crashplan’s Android app seems to be made by a very intelligent puppy as well. Is there a software company run by puppies I know nothing about?!

Arq

Arq is just an app for your Mac or PC. It doesn’t offer any cloud backup storage itself. It’s not even a guy like Backblaze or Crashplan. It’s a faceless, soulless robot who says nothing. This is what backup software should be like.

Arq can backup to Amazon Cloud Drive, AWS, Amazon Glacier, Google Drive, Google Cloud Storage, Dropbox, OneDrive, your SFTP server or NAS. You can set up multiple sources and destinations. For example, I have the following setup:

  1. Home folder → Dropbox
  2. Photo archive → Dropbox and Amazon Cloud Drive
  3. Podcasting archive → Dropbox
  4. Current video projects → Dropbox and Amazon Cloud Drive
  5. Work-related podcasting archive → Google Drive

And at any point I can add other destinations, including network-attached storage.

My Mac’s SSD is just 120G, but my Dropbox is 1TB. It’s great to finally make all that space useful by setting Dropbox as a destination for Arq. And Amazon Cloud Drive is just a great deal: unlimited storage for $60 per year. Of course, you have to remember that it’s not really unlimited, and if you go crazy, Amazon is allowed to kick you out.

Some cool features of Arq include:

  • Local encryption. Arq encrypts files before sending them, and sends them over SSL.
  • Backups are stored in an open, documented format. Even if Arq dies and completely disappears, your encrypted data is safe and accessible.
  • A very nice and straightforward native app with some advanced features like CPU usage limits, upload rate limits, scheduling, data validation and budget restrictions (relevant for AWS, for example). You can also make Arq run shell scripts before and after backup sessions.
  • Arq truly backups everything. All the files, any format, any size (even crazy tens-of-gigs files) without restrictions.
  • Fixed price and no recurring fees.

Arq can also archive: backup a folder, click “Detach…”, and Arq will stop backing it up but will store the previous backups indefinitely. This is great if you want to backup some external drives that rarely update.

Local backups: Time Machine and Carbon Copy Cloner

My photos and work-related audio and video projects live on an external drive and are backed up to the cloud along with the home folder. The whole internal SSD gets only local backups, because it’s not that important. There are two things I need from this system:

  1. Restore the system to a previous state. I never needed this with Macs, but I just feel safer this way when I upgrade the OS.
  2. Boot from a USB drive if the internal drive fails. As I said, I’ve never had a failing drive (neither HDDs nor SSDs) in my life, but SATA cables do fail sometimes. Should that happen, I can just boot from an external drive and continue working until the problem is solved.

Time Machine is good enough for quick restoration. I use Carbon Copy Cloner to make an external bootable copy. Alternatively, SuperDuper! is also nice. I like CCC more because it also copies the recovery partition, which is used to reinstall macOS. You don’t really need it, because all modern Macs come with a Network Recovery option.

That’s it, folks.

November 29, 2016 | permalink