Work habits (of the rich and famous)

In this chapter, you're going to learn about...

Discipline

I'll bet you figured I'd begin this section with an earnest appeal to your better side. You know: rise before dawn each morning and put in a solid three hours' work before eating a light breakfast - making sure not to drop any toast crumbs into your busy laptop keyboard.
Nope. This one is mostly about taking breaks. Why? Because staring at a single thorny problem for too long can sometimes make it harder to think creatively about it. You're more likely to end up hopelessly running around in cognitive circles. A solution? Take some time off and come back later with fresh eyes.
Of course, that doesn't mean you should spend all the time between now and "later" obsessively checking your social media feeds or catching up with the latest sports scores. I've personally found that that kind of break isn't usually helpful - especially considering that I don't really follow any sports. Rather, doing something productive away from the computer or even focusing for a while on a different element of your project can be remarkably refreshing.
Ok. But where's the discipline part? Well, no matter how well organized and clever you are, it really does all come down to consistency. Sure, redirecting and managing your frustration can help, but no matter how tough things get, if you're not willing to climb back into the saddle and charge into the next battle, and the one after that, you won't end up accomplishing much.
This, obviously, is true no matter what you're trying to do in life. But perseverance can be especially valuable while learning to produce and deploy applications. You've probably already noticed that just making something compile or load once doesn't always guarantee it'll work the same way the next time. In this context, discipline can mean forcing yourself to test your solutions over and over again using different parameters until you really do understand what's going on under the hood...and why.
So, along with a workflow that's flexible and goal-oriented, never lose sight of the venerable Shampoo Principle: rinse and repeat.

Experiment...and fail

Nothing beats abject and humiliating failure. No, really. Getting something to work perfectly the first time can actually be a bit of a disappointment, since it means you haven't really learned anything new about the way your technology is built. It can also mean that a real disaster is waiting for you somewhere down the road, when your project is already in production. And those hurt much more.
So embrace failure.
Embracing failure, however, doesn't mean building a tolerance - or even a perverse thrill - for pain. Rather, it's about learning how to watch for error messages and unexpected system events, and how to find and interpret log messages.
Example? Seeing a message like "ImportError: No module named x" may seem like nothing more than an annoyance at first glance, but it's really just your computer politely telling you that there's a required Python module waiting to be installed. Running pip install x will quickly solve that problem.
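Just to make this concrete (the module name here is purely an example, and newer Python versions phrase the error as ModuleNotFoundError, which is a subclass of ImportError), a failed import and its fix might look something like this:
$ python3 -c "import requests"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'requests'
$ pip install requests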
Your OS will generate vast volumes of log data dutifully reporting on all system events. Just to illustrate how you might access some of that wisdom on a Linux machine, this journalctl command will return all recent log entries classified as "error". This can be useful for tracking down errors you know occurred without having to wade through thousands of lines of historical - or trivial - entries.
# journalctl -p err --since yesterday
Similarly, this next example will return all events relating to the Apache web server service running on Ubuntu. Doing the same thing on a CentOS or Red Hat system would use httpd in place of apache2.
# journalctl -u apache2
Avoid copying the code examples you find online in how-to guides and simply pasting them into your project. There are two good reasons for this. The first is that you should never run a command you don't fully understand, and typing it out yourself forces you to work through what each part actually does.
Typing commands yourself also makes it more likely that, on a whim, you'll decide to experiment with small changes. Such curiosity will nearly always lead to good places. Unless, of course, the command you're talking about is the old Unix/Linux dd file system administration command. One wrong move with dd (often nicknamed "Disk Destroyer") can measurably reduce your long-term chances for happiness.
Having said all that, failed experiments can bring unwelcome consequences. Besides the risk of crashing your system altogether, playing around with a long line of extensions, plug-ins, and software packages can introduce conflicts and configuration rot. It can also make it hard to figure out exactly what caused a particular failure - or even a success.
In fact, after a while you might find your workstation has become plain old flaky. Without a reliable computer to work with, your productivity will grind to a painful halt.
The solution? Virtualize - the way Kevin did using VirtualBox as a sandbox environment at the end of chapter 2 (and as we'll describe in greater detail once we hit chapter 5). That way, the fallout from errors and bugs is limited to the virtual machine and should have no effect on your physical workstation.

Take notes

I'm sure you've been here before: something you're working on fails and you spend a few hours madly flailing around as you look for the solution. Finally, after building and rebuilding your environment and trying dozens of configuration setting combinations, it clicks and you're back in business. Exhausted but triumphant, you shut down your laptop and head to bed. It's 3:15 am.
And that's the end of the story. Or at least you thought that was the end of the story. A few months later you run into the exact same problem. Your first reaction is relief: "Well at least this time I know how to fix it." Except that you don't. You search through old code, logs, emails, and even your browser cache for clues, but it's a black hole.
Sound familiar? Well I can (just barely) feel your pain. Sure, it's happened to me, but because I'm now obsessive about fully documenting my operations I can barely remember the last time.
Here's how it should work.

Document

While you're still in "experiment" mode, make frequent copies of your code or commands. That might mean pushing updates to a Git repository, or saving changes to a plain text file. Just make sure that it's a complete record of your process containing enough information to allow you to rebuild. You can see an example of notes I created for viewers of my Pluralsight "Network Vulnerability Scanning with OpenVAS" course here: bootstrap-it.com/openvas. There are some brief meta directions, but it's mostly a sequential set of Bash commands.
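If Git happens to be your tool of choice, a minimal version of that kind of note-keeping might look something like this (the directory name, file name, and commit message are just examples):
$ mkdir project-notes && cd project-notes
$ git init
$ nano apache-install.md     # paste in the commands that actually worked
$ git add apache-install.md
$ git commit -m "Working Apache install and config sequence"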
Of course, those notes won't do you much good if your system crashes and takes them down with it. So make sure you back up your notes (along with all your other working files) early and often. Ideally, that should include an off-site backup to a reliable cloud service like Amazon's S3. Oh, and wouldn't you know it, there are two chapters - 4 and 5 - covering just such processes in my "Linux in Action" book.
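As one possible illustration - assuming the AWS CLI is installed and configured, and with the bucket name here standing in as a placeholder for your own - a single command can sync a notes directory up to S3:
$ aws s3 sync ~/project-notes s3://my-backup-bucket/project-notes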

Test

Tech projects tend to have many moving parts. So before archiving your notes, you should probably confirm that they genuinely represent the working version.
How? Test them. That is, start over from a fresh environment and apply the commands or code in the exact sequence and format used in the notes. If it works as expected, you're in business.
Even better, you've also got yourself some base content from which you can automate many common tasks. That's coming up next.

Infrastructure automation using scripts

A script, in case you haven't yet been properly introduced, is a text file containing a list of system-level commands. Here's an example of a simple Bash script that will install and then restart the Apache web server software on an Ubuntu Linux machine:
#!/bin/bash
# Refresh the package index, then install and restart Apache
apt-get update
apt-get install -y apache2
systemctl restart apache2.service
The file, assuming it's called scriptname.sh, can be made executable and then run from the command line using these two commands:
$ chmod +x scriptname.sh
$ sudo ./scriptname.sh
Of course, scripts can be much longer and more complex than that example, and can be made to do some pretty impressive things. And, of course, scripts can also be written and run on non-Linux operating systems using robust tools like, for instance, PowerShell on Windows. But I think you get the general idea.
I mention the subject here because automation should interest you...and scripts of one flavor or another are at the center of the automation revolution.
Visualize this: you're slaving away day and night learning how Docker containers can be used to deploy your web app. Wisely, you're spinning up VirtualBox servers for your testing. Launching a new Linux VM (based on a cloned image) doesn't take long, but babysitting the operating system through the Docker installation process can kill more than five minutes each time...and those are five minutes you'd rather not lose.
Would you like to cut that down to a single command that will babysit itself and still complete in less than a minute? Script it. For this example, head over to Docker's documentation pages, select the installation instructions for your operating system, note the commands you'll need to run, and then paste them into a script. All done. Although it wouldn't hurt to test it out in action just to make sure.
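Here's a rough sketch of what such a script might look like for an Ubuntu VM. Note that it cheats a little by using Docker's own convenience script from get.docker.com rather than the step-by-step repository setup in the documentation, and the "ubuntu" user name is just an assumption - treat it as a starting point rather than gospel:
#!/bin/bash
# Install curl, then let Docker's convenience script handle the rest
apt-get update
apt-get install -y curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# Let the regular user run docker without sudo (assumes a user named "ubuntu")
usermod -aG docker ubuntu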
This isn't the place to demonstrate how all this works in any great detail, but I would like to throw a few examples your way just to give you an idea of what's possible.

Provision Docker containers using a Dockerfile script

This file - named Dockerfile - will build an image based on Ubuntu 16.04, install the Apache web server software, create a simple web page with some content ("Welcome to my web site") to act as your web site root (index.html), expose port 80 for incoming HTTP browser requests, and start Apache whenever a container is run from the image. Not bad for just a few lines.
# Start from an Ubuntu 16.04 base image
FROM ubuntu:16.04

# Install Apache and create a simple index.html page in the web root
RUN apt-get update && apt-get install -y apache2
RUN echo "Welcome to my web site" > /var/www/html/index.html

# Document the HTTP port and run Apache in the foreground on startup
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
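To actually see that page in a browser, you'd build an image from the Dockerfile and then run a container from it, publishing port 80 (the image name "mysite" is just an example):
$ docker build -t mysite .
$ docker run -d -p 80:80 mysite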

Build a Docker-based WordPress site on AWS Elastic Beanstalk

This single file, named Dockerrun.aws.json, will launch two Docker containers backed by a wide range of AWS infrastructure. The first container will run the MariaDB database engine as the site's backend, and the other will host the WordPress application itself.
The two containers will talk to each other and, between them, provide the familiar WordPress web interface. Once Elastic Beanstalk has deployed the file, all you'll need to do is head over to the URL Elastic Beanstalk shows you and set up your site.
The process is explained properly in chapter 19 of my "Learn Amazon Web Services in a Month of Lunches".
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "mariadb",
            "image": "mariadb:latest",
            "essential": true,
            "memory": 128,
            "portMappings": [
                {
                    "hostPort": 3306,
                    "containerPort": 3306
                }
            ],
            "environment": [
                {
                    "name": "MYSQL_ROOT_PASSWORD",
                    "value": "password"
                },
                {
                    "name": "MYSQL_DATABASE",
                    "value": "wordpress"
                }
            ]
        },
        {
            "name": "wordpress",
            "image": "wordpress",
            "essential": true,
            "memory": 128,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80
                }
            ],
            "links": [
                "mariadb"
            ],
            "environment": [
                {
                    "name": "MYSQL_ROOT_PASSWORD",
                    "value": "password"
                }
            ]
        }
    ]
}
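If you'd like to try it yourself, one approach - assuming you've installed the Elastic Beanstalk command line interface (the EB CLI) - is to run these from the directory containing Dockerrun.aws.json. eb init will walk you through choosing a region and platform, and the environment name here is just a placeholder:
$ eb init
$ eb create wordpress-env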

Remotely manage a simple web server using an Ansible playbook

Ansible is a deployment orchestration tool that lets you automate the creation of full software stacks across vast fleets of remote servers. In other words, in theory at least, a single Ansible "playbook" can pretty much handle the administration of thousands of servers distributed across the internet.
The idea is that you compose one or more text files whose contents declare the precise state you want for all the system and application software on a specified machine. When run, the orchestrator will read those files, log on to the appropriate host or hosts, and execute all the commands needed to achieve the desired state.
Rather than having to go through the tedious and error-prone process manually on each of the hosts you're launching, you simply tell the orchestrator to do it all for you.
In this example (shamelessly stolen from chapter 16 of my Linux in Action), Ansible will log into each remote server in your webservers group - no matter how many of them there are - and add Apache and a locally sourced index.html file. Finally, it will confirm that the Apache service is running properly.
---
- hosts: webservers 

  tasks:
  - name: install the latest version of apache 
    apt: 
      name: apache2
      state: latest 
      update_cache: yes
  - name: copy an index.html file to the web root
    copy: src=index.html dest=/var/www/html/index.html
    notify:
    - restart apache
  - name: ensure apache is running
    service: name=apache2 state=started

  handlers:
   - name: restart apache 
     service: name=apache2 state=restarted
...And all in a dozen or so lines. Impressed? I sure am.
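Should you want to run it yourself, one way - assuming Ansible is installed, the playbook is saved as, say, site.yml, and an inventory file defines your webservers group - would be:
$ ansible-playbook -i inventory site.yml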

Case study

Since we last met him, Kevin has pushed ahead with his Bash-based DevOps explorations. He figures that each of his developers can push their code updates from their laptops to the company Git repository. He can then pull the new code into the virtual machine sandboxes he's using as staging servers. There, his Bash scripts will automatically incorporate the code into a live application that can run test connections with one of the company's vendors.
But something went wrong. The remote connection attempts were all timing out, and the helpful IT team at the vendor's location reported that they were receiving data requests from the app, but their replies were failing.
Kevin carefully collected as much information as he could from the vendor and then searched through the logs on his local VM. It looked like a networking problem. Further research confirmed that incoming requests to the VM were being blocked by the local NAT network. After some time and a few frustrating trials, Kevin managed to set up port forwarding through the local router so that traffic originating from the vendor - and only the vendor - would be allowed through.
Now, having read this chapter (at least up to the case study section, I guess), Kevin was careful to immediately document every setting and step he'd used in the process of successfully opening up the network. He also tested his solution from a clean VM, and then backed up the documentation to an off-site location.
Then he spent 20 minutes watching cute cat videos on YouTube. But he'd earned it.