I started with Linux in the late nineties, with a distro that came free on the cover of a magazine and a dial-up modem that the kernel had never heard of. The process was simple: boot it, hit an error, write the error down, go to college the next day, find the dependency, save it to a floppy, go home, try again. Repeat until either it works or you give up.

I made the classic mistake early on and accidentally wiped my main OS in the process. So there was no fallback. I was committed, in the way that only an accidental partition deletion can make you committed. Eventually I got it working, but drifted back to Windows 98, because playing PC games mattered more than the satisfaction of having figured it out.

That pattern — friction, persistence, eventual competence, and an honest assessment of whether it was actually the right tool — is why Linux stuck.

Sun terminals running Solaris at university reminded me what I liked about it: a clean interface, and the expectation that you’d drop into a terminal and understand what was going on. So I started again, with another free OS from the front of another Linux magazine — this time a version of SuSE, probably 6 or 7. It defaulted to KDE, which at the time felt a bit too colourful. That might be unfair — memory rarely is — but what I do remember is that getting SuSE working on a home machine with a network card and a cable router was much easier than my first attempt. Things had definitely improved.

And then wireless happened.

Around this time, 802.11b was the new thing. A Fedora Core 3 install on a laptop with an unrecognised PCMCIA wireless card sent me straight back into the weeds, digging through /proc just to identify the hardware well enough to build a driver for it. It felt like my first experience all over again — but this time I had a working desktop and a broadband connection, so at least I didn’t have to keep going into college to piece together dependencies one floppy at a time.
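For the curious, the spelunking went something like this — a rough sketch rather than a transcript, since the exact tools and /proc paths varied by kernel and distro, and orinoco is just one example of the drivers those cards needed:

```sh
# Identifying an unrecognised wireless card, circa Fedora Core 3.
# (2.6-era kernel; exact paths and tool names varied by distro.)

# CardBus cards show up on the PCI bus -- the numeric vendor:device
# IDs are what you search for to find the right driver source.
lspci | grep -i net
lspci -n

# The raw device table, for when lspci wasn't installed yet.
cat /proc/bus/pci/devices

# Older 16-bit PCMCIA cards reported manufacturer IDs through
# the pcmcia-cs tools instead.
cardctl ident

# After building and loading the module, confirm the kernel
# actually bound it to the card.
dmesg | tail -n 20
lsmod | grep orinoco   # orinoco: a common Hermes-chipset driver of the era
```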

I used Fedora and Ubuntu for a few years, occasionally trying other distributions. I never cracked Arch — that always felt like more effort than I wanted to invest. But I experimented with all sorts of things: building over the network, installing over FTP, breaking and fixing systems just to understand them.

I also installed Linux on family computers. If I was going to be the person they called every time they clicked a terrible link or downloaded a “free” version of something that clearly wasn’t, then I was going to make my life easier by choosing an OS that simply didn’t have those problems in the same way.

My first real professional exposure came when I spotted a neglected RHEL server during a network scan I was running to identify vulnerabilities. I asked around — no one knew what to do with it, and no one had the root password. So I took ownership. There was no documentation, and I often had to stay late to do maintenance, so one evening I rebooted the box, interrupted the boot process, and reset the root password.
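The trick itself is nothing exotic — it’s the standard recovery procedure for anyone with console access. On the RHEL of that era it meant booting single-user from GRUB; on RHEL 7 and later the equivalent uses rd.break. Roughly:

```sh
# Older RHEL (legacy GRUB): at the boot menu, press 'e', append
# 'single' to the kernel line, and boot -- you land in a root shell.
passwd root

# RHEL 7+ (GRUB2): append 'rd.break' to the linux line instead,
# then from the emergency shell:
mount -o remount,rw /sysroot   # the real root starts out read-only
chroot /sysroot
passwd root
touch /.autorelabel            # let SELinux relabel the changed /etc/shadow
exit                           # leave the chroot
exit                           # resume the boot
```

Which is also, of course, exactly why an undocumented server with an unattended console counts as a finding in the first place.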

I added the credentials to the team password manager, and just like that — one reboot and a password change — I became responsible for the Linux estate. At first I thought that estate was one machine. It wasn’t. It was just the only one I’d found. Once I was “the Linux guy”, more systems kept appearing.

Since then, working with production RHEL, CentOS, Debian, and Oracle Linux across different environments has confirmed something I suspected early on: the skills transfer. The specifics change, but the underlying model stays the same.

Twenty-five years later, it’s still the environment I do my best thinking in.

Everything else is overhead.


What’s here

The Linux posts on this site span the full arc: physical servers built from scratch, shell scripting to make network management survivable, hardening procedures developed by reading CVEs rather than following checklists, and, more recently, lab work in KVM, Oracle Linux, and whatever WebCenter needs me to pretend isn’t running on top of it all.

The Infrastructure to Systems series is the clearest thread through this material. It follows the progression from hands-on physical infrastructure work through to platform thinking, with Linux running underneath all of it.


Start here

If you’re a hiring manager looking for evidence of Linux depth: Learning at Layer 1 is the origin story, Building Linux Servers Before Automation is the production context, and Learning to Harden Linux shows the methodology.

If you’re interested in the lab work: Oracle WebCenter — The OVA File and Building an Oracle WebCenter 14c Lab From Scratch are the most recent examples of the approach — build a controlled environment, reduce the variables, learn the system rather than the symptoms.

If you want the full picture: start with Infrastructure to Systems and follow it in order.