What Works, What Doesn’t Work

An important lesson I’ve learned while working at a startup is to do more of what works and quickly jettison what doesn’t. That’s the way to success; the rest is just noise and a waste of time. This lesson applies to everything in life.

Data is your friend

We generate data all the time; whether or not it’s captured in a database or spreadsheet, just by being alive you throw off data points. The trick is to notice it, capture it, and then do something with it. It’s the “do something with it” part that determines your success. Success can be anything of value to you: time, money, weight loss, stock trading, whatever. You just need to start capturing data, evaluating it, and taking action on it.

This is where you fail

Many people fail by taking no action on the data they’ve captured and evaluated. They hope that things are going to get better or that things are going to change. Maybe they will, maybe they won’t, but you must act on what the data is telling you now. NOW!

My Examples, what Works/Doesn’t Work

  1. My $100 Forex experiment worked really well for a time, then it started to flag. The data was telling me that my trading method was no longer working. Did I listen? Nope. I blew up that account. This didn’t work for me.
  2. Writing RapidMiner tutorials on this blog ended up getting me a job at RapidMiner. This led to an amazing career in Data Science. Writing and taking an interest in things works.
  3. Day trading doesn’t work for me. I blow up all the time. What works for me is swing and trend trading. Do more of that and no day trading.

Keep it simple, stupid

Another thing I’ve learned working at a startup is to keep things simple and stupid. You’re running so fast trying to make your quarter that you have no time for complex processes. Strip things down to their minimum and go as light as you can. This way you can adjust your strategy and make changes quickly; you can do more of what works and jettison what doesn’t.

Migrate away from WordPress to Hugo

I’m no stranger to trying different CMSs to see which one works best in terms of functionality, speed, and ease of SEO use. Over time – and many mistakes later – I’ve found that Hugo is the best on all three criteria. It’s very functional, fast to build, and easy to tweak for SEO. When you couple that with the AWS backbone, you get a fast-loading website where all the static builds happen automatically as soon as you update your GitHub repository. This post covers how I migrated away from WordPress to Hugo and used AWS Amplify to host my blog and website.

Introduction – WordPress Years

I started this blog on WordPress and then transferred it to Jekyll, then back to WordPress, and so on. You can read about my ‘passion’ for trying different CMS platforms by checking out my CMS tag, but the reality is that WordPress is really easy for newbies to get up and running quickly. Many popular blogs and sites run WordPress reliably and manage all their content with it. That’s why it’s super sticky, and it has some great themes to use out of the box. It’s when you want to ‘pimp’ or optimize your WordPress installation that you start running into problems.

For example, it relies on a database backend, and every time you visit the site it dynamically serves up the page. This is all good until you realize that you pay a loading time penalty for it. It can take a while. Plus you have to worry about malicious code injections from hackers that spam your site for the latest penis enhancement drug or Bitcoin scam. These code injections were the main impetus for me to start looking to migrate to a static blog generator.

If you want to do any SEO with WordPress, you have to know PHP. I guess that’s ok, but I never really cared much for PHP and found it boring. If you’re in that same boat, you’re going to have to use a plugin like Yoast. That’s great until you realize you have to start paying for it to get any benefit out of it. Need to modify your footer uniquely? You need a plugin. Want to back up your entire website? That’s another plugin (and usually another payment). Before you know it, you have ten or more plugins costing you money on top of what you pay for hosting.

All these plugins also end up slowing your site down a lot, which hurts your SEO in a big way. At the end of the day, you’re out of money and stuck with a slow site. Sure, it might look pretty, but if you want to grow organic traffic you have to focus on great, valuable content that loads fast, is SEO-optimized, costs a hell of a lot less, and makes YOUR life easy.

Hugo and AWS Amplify (and Github)

Let me be the first to warn you. Running Hugo on AWS Amplify (and Github) is not as easy as running WordPress with plugins. If you’re happy with your WordPress infrastructure and process, then just close your browser now and move on. However, if you want to have fine-grained control over your blog/website and feel comfortable doing a bit of ‘hacking,’ then please continue reading.

I just settled on using Hugo to make running my blog easier. I no longer have to deal with plugins and pay for them. Of course, this forces me to be more hands-on with how I want to maximize SEO, how I want to leverage the Indie Web, and even Web Monetization. I also have to do manual backups now instead of using Vaultpress, but with Github, that’s become so easy that it happens on autopilot.

Writing content for a Hugo-generated site is like a dream. The build times are fast, it has great image processing ability, and I can check out how my site will look before I deploy it if I run the development server. So I can do a lot of ‘offline things’ and know that once I push my update to my Github repo, AWS will rebuild and deploy my site automatically.

Of course, I have to pay for AWS Amplify usage, but it will be A LOT cheaper than the VPS at Dreamhost ($16/month). Sorry, Dreamhost, but I don’t think you’re getting renewed this year.

Install Hugo

First things first. The move will take some time but, if you follow these steps, it should go off without a problem. However, to prevent any mishaps, we’re going to start by setting up a ‘staging’ environment. The goal is to build a clone of your existing WordPress site with Hugo and AWS Amplify so that when it’s all up and running, you just ‘cut over’ to it.

Follow these steps:

  1. Download Hugo to your local machine and then read the quickstart introduction on how to build a site
  2. Grab a theme and install it in your Hugo installation
  3. Configure the config.toml file to your liking, paying special attention to your permalink structure
  4. Start up Hugo’s dev server with hugo server and then navigate to http://localhost:1313

You should see an empty local website running at that web address.
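For reference, steps 1–4 boil down to a handful of commands (taken from the Hugo quickstart of that era; the Ananke theme is just an example, swap in whichever theme you grabbed):

```
hugo new site mysite
cd mysite
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
echo 'theme = "ananke"' >> config.toml
hugo server -D
```

The -D flag tells the dev server to render draft posts too, which is handy while you’re still migrating content.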

One of the biggest mistakes I made was NOT reading up on the permalink settings in config.toml. It cost me hours of work creating canonical aliases for posts that I reorganized. So pay close attention to the permalink configuration in your config.toml file.
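As a minimal sketch, a permalink setting in config.toml looks like this (the section name and URL pattern are examples, not a recommendation; pick a pattern and stick with it):

```toml
# config.toml — posts under /content/posts/ will build to URLs like /2020/07/post-title/
[permalinks]
  posts = "/:year/:month/:slug/"
```

Changing this later is what forces you into the alias-creation busywork described above, so decide on the structure before you import anything.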

Sign up for an AWS account

The next step is easy, sign up for an AWS account here. We’ll get back to this later!

Sign up for a Github Account

This step is also easy: you’ll need to sign up for a GitHub account. You could use GitLab, Bitbucket, or others, but I use GitHub for personal and work projects. It’s great for backing up your code and files, but it has a bit of a learning curve. You will need to get familiar with Git, and I plan on writing a Git tutorial in the future.

How to Migrate Posts from WordPress to Hugo

Now we get to the fun part: porting your WordPress posts and pages to your local Hugo installation! First, you’ll need to extract your WordPress posts from your database. WordPress has a generic Export function, but that’s not going to map the WordPress fields to the Hugo fields out of the box. What you need is a WordPress-to-Hugo importer.

If the WordPress exporter doesn’t work, try using the WordPress to Jekyll Exporter plugin and then the Jekyll-to-Hugo importer.

Once you have your files, open one up to see if the YAML front matter is correctly formatted. The front matter is where all the fine-grained control happens for your Hugo-powered website. It’s where you control all aspects of your SEO metadata as well as options for the table of contents, keywords, categories, slugs (permalinks), titles, drafts, aliases, etc.

It should look something like this:

---
title: Post Title
date: 2020-07-21
slug: post-title  # this builds to /content-directory-post-is-in/post-title/index.html
tags:
- Word1
- Word2
categories:
- Word1
- Word2
---

The key is to have all your post metadata between the ---‘s at the top of the post. Note that this will be a markdown file with a .md extension. If you’re not familiar with markdown, you can read up on it here. It takes a moment to get used to, but once you do, you never have to worry about formatting what you write; markdown handles all of that when your post gets translated into HTML.
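If markdown is new to you, a post body is just plain text with lightweight markup, something like:

```markdown
## A Section Heading

Some *emphasized* text, a [link](https://example.com), and a list:

- First item
- Second item
```

Hugo turns this into the corresponding HTML (an h2, em and anchor tags, and a ul) at build time.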

Take all your exported posts and move them into the /content/posts/ directory of your local Hugo instance. Note that Hugo loves content organization, and you should think about whether you want to use content folders or not. You can read up on content organization here.

Hugo’s Development Server

Once the posts are in the /content/posts/ directory, you should see Hugo rebuilding the site in your terminal.

It will then generate the website as a live preview. Any edit you save triggers an automatic rebuild, and you can see the results in real time. The great part about this is debugging. Since Hugo is written in Go, it doesn’t ‘suffer fools’ and will break the build if anything isn’t perfect. This is good and bad: good because it tells you exactly what went wrong, bad because you need to fix that mistake before you can build your website for production.

I use this development server ALL the time. It lets me confirm if what I’m about to push to Github is what I want the world to see. This is where I test new partial templates, new shortcodes, and try new Hugo features. The development server will become your best friend and you can read up on some more of its functionality here.

Use Github to backup your site

I use Github for my code and project-related stuff. The concept of Git is just brilliant. It’s version control, process control, and backup all rolled into one. Granted, Github is an extra step in my regular workflow but it makes sure that my blog remains consistent over time. No weird hiccups unless I screw up! If I screw up, Git has a ‘rollback feature’ so I can undo my mistake quickly. The older I get the more I realize that Git is the true power here, especially if you’re in a heavy development environment.

You’re now going to take your entire local installation of Hugo, plus your content, and sync it with a GitHub repository. The terminology you’ll hear is that you’re going to ‘push your code to a remote repo.’ GitHub is the remote repository where you’re going to push your content and files to.

You can use the git command line but I like to use Github Desktop. Makes things easy!

  1. First, create a local repository in GitHub Desktop
  2. Select the folder containing your entire Hugo website. I name my folder neuralmarkettrends.com, and when I create a local repo, it automatically gives the remote repo the same name
  3. Click ‘Publish’ and it will create the remote repo on GitHub. Note: your site is NOT backed up yet
  4. To back up the site to your remote repo, create a commit message and then push it
  5. Once you push to your GitHub repo, it should populate
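GitHub Desktop is just clicking through the underlying Git commands. Here’s a minimal sketch of the same backup workflow on the command line (the folder, file, and remote names are stand-ins, not a real site):

```shell
# Work in a scratch directory standing in for your Hugo site folder.
cd "$(mktemp -d)"
echo "demo content" > index.md    # stand-in for your site files

git init -q                       # step 1: create the local repository
git add .                         # stage everything in the folder
git -c user.name="demo" -c user.email="demo@example.com" \
    commit -q -m "Back up Hugo site"   # step 4: commit with a message

# Steps 3 and 5 need a real remote; with one created you would run:
#   git remote add origin git@github.com:<user>/<repo>.git
#   git push -u origin main
```

After the commit, the working tree is clean and the history holds your backup point; the push simply mirrors that history to GitHub.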

Make sure your repository is marked as private; this way no one can see your posts and files.


How to Put Hugo into Production with AWS Amplify

Once I had my blog backed up on Github I just followed the AWS Amplify instructions and generated the site successfully!

There was one big tweak that I made. AWS Amplify uses Hugo version 0.55 by default, and that’s not compatible with many of the better themes. I had to update the Hugo version the build was using to 0.74.2. It’s not hard; all I had to do was go to the Build Settings and follow these instructions.
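For reference, the Amplify build spec for a Hugo site looks roughly like this (a sketch only, check your app’s Build Settings for the exact file; the Hugo version bump itself happens in the build image settings, not in this file):

```yaml
# amplify.yml — build spec for a Hugo site on AWS Amplify
version: 1
frontend:
  phases:
    build:
      commands:
        - hugo            # builds the static site into public/
  artifacts:
    baseDirectory: public  # Hugo's default output directory
    files:
      - '**/*'
  cache:
    paths: []
```

Every push to the connected GitHub branch triggers this build and redeploys the contents of public/.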

End Notes

I migrated to AWS Amplify for selfish reasons. I was about to renew my VPS at DreamHost for over $200 and thought that it was a tad bit high. Using AWS Amplify I expect to pay around $5 a month based on my past traffic, if it gets more active then I’ll pay more but that’s ok.

The added benefit to migrating here is that I get a wicked fast load time on the AWS backbone. I get like 1 second mobile device load times and millisecond desktop device load times. Wowsers!


I realize that this migration is tricky for many non-hacker types, but you can learn if you want to. One thing we didn’t cover in this post is using a custom domain with AWS Amplify. That was a bit tricky to set up, and I’ll write another post on it shortly. Still, these steps should get you to a mirrored copy of your WordPress blog on AWS Amplify.

Installing Ghost CMS on AWS Lightsail

This tutorial is not about porting Hugo posts to the Ghost format; instead, it’s about how I got a custom domain and a CDN set up, and removed the :80 port from a new Ghost install.

First off, setting up the machine on AWS Lightsail is pretty easy. It was as simple as selecting the Ghost install and taking the cheapest instance possible. I think the cost is like $3.50 a month, which is pretty sweet IMHO.

The harder part was configuring everything. When the instance spins up, you get a dynamic IP assigned to it. It’s not a pretty custom domain name but rather a raw IP address that’s not easy to remember, and it changes when you reboot the instance.

I had to set up a static IP (an added yearly cost) to keep the IP from changing, because I was rebooting this instance a lot.

Once I got the static IP, then I had to attach my custom domain thomasott.io to it. I thought that while I was doing that I should make sure that it’s SSL enabled and that my content would be served through a CDN (Content Delivery Network) for scalability and robustness.

Just those tasks sent me down a rabbit hole that took about a day (on and off, time-wise) to configure my instance. It was fun, though, because I got to tinker with domains, CDNs, and other backend-related stuff, something I normally don’t do.

This tutorial is about how to turn your site into a real Ghost site once you get your AWS Lightsail instance running.

The Instance

I’m not going to go into detail about setting up the instance, but the four screenshots below show you how easy it is.

First click on ‘Create Instance.’

Then select your Instance location.

Then select Ghost.

You’ll notice there are a lot of other things you can install too. My next goal is to set up a WordPress and Django site as a test and work through the machine configurations like this one.

Finally, you select the size of the instance you want. I went with the cheapest option.

That’s really it, you launch it and it goes live.

Logging in to Ghost

This was an annoying part: I didn’t know where to log into the Ghost admin panel at first. The answer is:


But what are the credentials? The Ghost instance is managed by Bitnami, and they have specific instructions on how to log into your Ghost instance. You have to SSH into the instance and run a command to get the password. The default username is bitnami.

Once you SSH into your instance, get the password, then go to:


Then log in and make a new user. That user will be you, so don’t forget your login credentials.

Remove Bitnami header

Once I had a user and had tested logging in successfully, I looked into getting rid of that annoying Bitnami header. This was easy to do; I just used the bnhelper tool.

All I did was SSH into the instance and run sudo /opt/bitnami/bnhelper-tool

This led me to this screen and I selected Remove the Bitnami Banner.


Set up Static IP on AWS Lightsail

My next step was to attach a static IP to the instance. That was pretty easy too; all I did was navigate to Networking and click on Create static IP.

Then I followed the instructions, selecting where my instance was and filling out the rest of the information.

Set up Custom Domain Zone

Now, this part got tricky: setting up the custom domain zone. Here I wanted to attach my AWS-registered domain thomasott.io to the static IP.

In order to do that I needed to make sure that the DNS servers that this instance requires are the same DNS servers set up in my Route 53 Hosted Zone.

There are some important things you need to set up in Route 53 to make sure everything resolves correctly. The problem is that you’ll be setting them up a bit out of order.

The key things for Route 53 are this:

  1. Make sure the domain name servers are the same in Route 53 and your Lightsail instance of Ghost
  2. Point an A record to the CloudFront distribution URL (we haven’t set this up yet)
  3. Make sure your SSL certificate is verified so you can get HTTPS (we haven’t set this up yet either)

The problem is that you won’t have all the information to finish this configuration until you set up your CDN and SSL certificates.

Set up the CDN

It’s advisable to serve your images and content through a CDN (Content Delivery Network). Why? Because it helps with load times and uptime. Ever since I switched to a CDN my SEO for my other blog has skyrocketed. You should consider this too.

To set that up, just click on the Create distribution button.

Then follow the instructions by selecting your instance (mine is called Ghost-1).

Once you do that, you’ll get a CloudFront URL where your blog will be served from.

Important! Now you’ll have to navigate back to Route 53. In your Hosted Zone for your custom domain, create an ‘A’ Record but select Alias to Cloudfront distribution.

If you select the correct Instance location, your newly created Cloudfront distribution should populate automatically.
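The console handles this for you, but for reference, the same alias record expressed as a Route 53 change batch looks roughly like this (the domain and CloudFront URL are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront aliases use):

```json
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "example.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d1234abcdef.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
```

An alias record resolves at the DNS level, so visitors hit the CDN edge directly instead of bouncing through a redirect.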


Set SSL Certificates on AWS Lightsail

We’re still not done yet with setting up the custom domain. For that, you’ll need to attach SSL certificates and then reference them back in Route 53.

Click on the Distribution button and navigate down to the enable custom domain option.

Select enable.

Then navigate lower on the page and click Create certificate. Fill in your apex domain name (not www.your-custom-domain.com, but your-custom-domain.com).

Once you create your SSL certificate, it will ask you to verify it. You will need to take that information and create a CNAME record in Route 53. Once you do that, you should be done with all your Custom Domain Name/Zone configurations.

The remaining steps are related to the backend of Ghost and the configuration of the web and production server.

Update Machine Name

SSH into your instance and update your machine name. To do that you’ll need to do the following commands referenced here.

sudo /opt/bitnami/apps/ghost/bnconfig --machine_hostname example.com

where you replace example.com with your custom domain.

Then, to make sure it ‘sticks’ when you reboot, you’ll have to rename the bnconfig file like so:

sudo mv /opt/bitnami/apps/ghost/bnconfig /opt/bitnami/apps/ghost/bnconfig.disabled

Update httpd-vhost.conf

This next step caused me grief because I couldn’t find the damn httpd-vhost.conf file at first.

I finally found it at /opt/bitnami/apps/ghost/conf/

You have to edit this file and update it with your custom domain information. I followed ‘Approach B’ in these instructions.

Then I restarted the webserver.

Update Production settings

Once I got to this point, everything started to work well, except that the port :80 was appended to my custom domain thomasott.io. It drove me nuts.

It wasn’t until I read this post on how to remove it that I realized there’s this production setting for Ghost.

Once I removed the :80 from the JSON file and restarted the instance, everything worked.
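For reference, the change amounts to editing the url value in Ghost’s config.production.json (the exact path varies by install; on the Bitnami image it lives under the Ghost app directory). A hedged before/after sketch:

```
Before (port appended to every generated link):
    "url": "http://thomasott.io:80",

After (restart Ghost once saved):
    "url": "https://thomasott.io",
```

Ghost builds every internal link from this url value, which is why the stray :80 showed up everywhere until it was removed.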

End notes

I’m probably forgetting something because there were numerous small tweaks, none of them hard. I will update this post with any missing information as I remember it.

The Adsense Money Making Experiment

I haven’t updated my readers in a while about how my Adsense experiment is going. I recently made a switch back from Blot.Im to WordPress to turbocharge SEO and Adsense.

In the past two months (September and October) I made a total of only $0.24. This was pitiful indeed; it would take me forever to earn enough revenue to buy a single roll of film!

Optimizing Adsense with WordPress

After migrating BACK to WordPress and installing Google’s Adsense plugin, things are turning around. Just 8 days into November, I’m already up 200% from last month! My impression-to-click ratio has just about doubled.

While this is all good news, there still is a lot of work to do as I clean up my bouncing pages and go through all my posts. There’s a lot of SEO work to be redone but the effort is well worth it. If November turns out to be a good month, then I will start adjusting my experiment goals upwards!

Custom Reports in Google Analytics

Recently I imported some custom reports into Google Analytics that I found online. They have been eye-opening indeed! My favorites are the Profit Index and Time of Day custom reports.

Profit Index

Google Analytics assigns a page value to every page you have, provided you use Goals. Without Goals, this won’t work! In my previous post, I wrote about how I started using Goals to see how readers interacted with my site. I arbitrarily assigned a value of $1 each time a reader clicked on a tag or stayed on a post for more than 5 minutes.

I began searching Google to see if I could find a way to lower my bounce rates after I switched back to WordPress (that’s for another post altogether). As soon as I switched back, I noticed an increase in bounce rates, and that bothered me.

I found out that bounce rates are really just people going to one page (usually my home page) and then dropping off. The majority of the visitors have no desire or incentive to continue through the site. The ones that do usually end up on my Tutorial or Archive page.

In my sleuthing I found something called the Profit Index. This is a fantastic report you can build for your Google Analytics as a custom report. The Profit Index can show you which posts have a high value but also the highest bounce rates! You can also see which pages have the highest Adsense revenue versus bounce rates. Once you know where the problem is, you can work to fix it.


For the most part, all my posts are incredibly sticky with OK page value versus bounce rate, but I never dreamed that the Stock Trend Following post had such a high drop-off rate.

Time of Day

This custom report is a fun one for me. It lets me look at what time of day readers come to my site, what day they come to my site, and, most importantly, what time AND day they come to my site. Originally written by Dan Barker, it’s very enlightening!


Over the course of the last 30 days, my most popular visit times have been Thursdays at 11AM, 1PM, and 5PM. The numbers change when I look across the whole year, but Thursday at 11AM appears to be the winner. Is it any wonder why I scheduled this post for today and at this time?

Note: Day 0 is Sunday.

Get Custom Reports in Google Analytics

Getting custom reports in Google Analytics is pretty easy if the creator has shared them. With the exception of the Profit Index, the Time of Day report is shareable and easy to install in your Google Analytics dashboard. You can rebuild the Profit Index report by following the instructions on their website; it’s pretty easy but eye-opening!

If you want more reports, just visit this page here. They have some great free ones!


Rebuilding a Blog – Part 4

In this post I wanted to review some goals I created in Google Analytics. These goals were created to see how people are interacting with my site.


I created a total of 7 goals:

The chart shows some interesting results. I have an amazing amount of sticky time, and people read more than one post. This makes me think I should abandon Google Ads and sell some ad space on my own.

On top of this, there’s some activity in the use of the search box, and people do click on the Tutorial tags. No one cares about the SEO and RSS tags.


I like looking at the alerts in Google Analytics. It’s nice to see that everything is green and there is some activity in new users and page sessions.

Overall, things are improving here, but the lack of tag clicks – except for the Tutorials tag – leads me to guess that:

  1. People don’t care about those tags, or
  2. People don’t know enough about navigating to those tags.

Next Steps

I think I will make some small navigation changes to allow users to search via tags OR via the search box. I’ll examine my top-read content and determine which articles need to be ‘refreshed’ in my next post.

This post on rebuilding a blog is a continuation of my previous post.
