
IoT, Code, Security, Server Stuff etc

Views are my own and not my employer's.

Personal Development Blog...

Coding for fun since 1996, Learn by doing and sharing.


Improving the speed of WordPress

September 22, 2017 by Simon

This post shows my never-ending quest to speed up WordPress for free.

I used W3 Total Cache in the past but decided to check out what others recommend. I found the post 6 Best WordPress Caching Plugins Compared, which covers some WordPress caching plugins:

  • W3 Total Cache
  • WP Fastest Cache
  • Cache Enabler
  • WP Rocket
  • WP Super Cache
  • etc.

Which plugin should I use?

Benchmark (No Caching Plugin)

I tested my site with https://www.webpagetest.org/ before installing a caching plugin, and it loaded in a terrible 21.3 seconds (over 141 files). My blog is hosted on Jumba (Net Registry) on an Ultimate plan for $25 a month.

About 70% of what my site delivers seems to be images, so I wondered whether a page caching plugin could help.

I already run the EWWW Image Optimizer plugin to automatically compress images as I upload them to my site (read my blog post on the EWWW Image Optimizer here). I keep images at high quality to capture all the details.

WP Fastest Cache Plugin

I decided to try WP Fastest Cache because its source was updated 4 hours ago, compared to WP Super Cache's last update 5 months ago. Both plugins offer similar GTmetrix performance improvements, and WP Fastest Cache has been tested with WordPress v4.8.

Installing WP Fastest Cache

I looked for the WP Fastest Cache plugin in the WordPress admin plugin search but could not find it there.

I downloaded the latest version of WP Fastest Cache from https://wordpress.org/plugins/wp-fastest-cache/

I uploaded the WP Fastest Cache plugin to my site.

I activated the plugin.

The WP Fastest Cache plugin is now installed 🙂

It appears to have automatically cached/indexed my site.

Now it's time to run the same benchmark (same settings: Singapore, Chrome) and see if the site is faster.

Test 1 of 3 is underway.

WP Fastest Cache Results

Wow: WP Fastest Cache loaded my site 2 seconds slower (try 1 = 23 seconds, try 2 = 21 seconds, try 3 = 28 seconds).

This could have been weekend traffic or hosting issues, but it was not what I expected.

I disabled the WP Fastest Cache plugin, ran the benchmarks again, and it was still 23 seconds (weekend traffic?). I re-enabled WP Fastest Cache and re-ran the test, but saw no improvement.

My bad: I think I needed to manually configure the WP Fastest Cache plugin by opening the new WP Fastest Cache menu on the left-hand side of the WP admin dashboard.

There I enabled the caching options in the WP Fastest Cache settings.

I ran https://www.webpagetest.org tests again and got 16s, 18s and 16s across three tests, plus an A for compressed images. It appears you need to manually configure the WP Fastest Cache plugin after installing it (I missed this step).

I disabled WP Fastest Cache and tried the WP Super Cache plugin; the test results were 29s, 24s and 24s (slower than WP Fastest Cache).

I then tried the W3 Total Cache plugin and the results were 30s, 16s and 26s.

I tried Autoptimize and it tested at 45s.

It looks like WP Fastest Cache is the fastest, so I'll turn it back on until I set up a CDN.

Fast Forward to Sept 2017

Since writing this post I have moved away from a shared cPanel host to a self-managed Vultr server closer to me, and I have moved my email to Google G Suite. I have learned how to deploy and manage WordPress with command-line tools. I have set up servers on Digital Ocean before, but those servers are located in Singapore rather than Sydney, so latency was poor and scalability limited. SSL makes sites slower, and servers far away just compound the issue.

Re-enabling the WP Fastest Cache Plugin

I tried reinstalling the WP Fastest Cache plugin, and for me the plugin alone just slowed my site down by 6 seconds.

I opened my NGINX config to find the user NGINX runs as:

sudo nano /etc/nginx/nginx.conf

My user is: www-data

I enabled the WP Fastest Cache plugin and ensured that user owns, and can write to, the cache folder.

sudo chown -R www-data:www-data /www/wp-content/plugins/cache
sudo chmod -R 755 /www/wp-content/plugins/cache
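For reference, mode 755 gives the owner read/write/execute and everyone else read/execute. You can rehearse that scheme on a throwaway directory before touching the real cache folder (the /tmp path below is just an example):

```shell
# Rehearse the 755 permission scheme on a throwaway directory
# (the real target is the plugin's cache folder shown above).
mkdir -p /tmp/cache-demo/all
chmod -R 755 /tmp/cache-demo
stat -c '%a %n' /tmp/cache-demo /tmp/cache-demo/all   # GNU stat, as on Ubuntu
```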

Below are the settings I use.

WP Fastest Cache

Installing the WP-Optimize Plugin

I recommend setting up the WP-Optimize plugin, as it will optimize your database and keep things fast. It only saves me a second on my load times, but every bit helps.

WP Optimize

WP-Optimize lets you review database optimizations.

WP-Optimize database savings

Setting up Nginx GZip Compression

I added the following to my NGINX config:

gzip on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

I also set the minimum response size to compress:

gzip_min_length 20;

Benchmark with G-Zip, Caching and WP Fastest Cache

With WP Fastest Cache enabled I now load my site in 13.9 seconds from Singapore. Time to disable the WP Fastest Cache plugin, as it does not seem to be helping without a CDN in front of the site.

With Cache plugin

Setting up Browser Caching

I also set up browser caching by editing my NGINX site config:

sudo nano /etc/nginx/sites-available/default

and added:

location ~*  \.(jpg|jpeg|png|gif|ico|svg|js|css)$ {
        expires 365d;
}

I am not sure whether caching CSS and JS for a year will cause problems later (WordPress does append ?ver= query strings to enqueued files, which helps bust the cache when they change).
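You can check which request paths that location regex will match without touching NGINX, by replaying it with grep (the sample paths are made up; grep -i mirrors the case-insensitive ~* modifier):

```shell
# Replay the static-asset extension regex against sample request paths.
regex='\.(jpg|jpeg|png|gif|ico|svg|js|css)$'
for path in /wp-content/uploads/photo.JPG /wp-json/wp/v2/posts /themes/x/style.css; do
  if printf '%s\n' "$path" | grep -qiE "$regex"; then
    echo "$path -> expires header would apply"
  else
    echo "$path -> no match"
  fi
done
```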

Benchmark with G-Zip, Caching and without WP Fastest Cache (Singapore)

I re-ran the tests and got 10.9 seconds, and a B for cached content. When I started on cPanel I was getting nearly 30s.

Benchmark

Benchmark with G-Zip, Caching and without WP Fastest Cache (Sydney)

I have always benchmarked from Singapore (Sydney was not an option when I started), but now it is. Out of curiosity, what is my website's load time from Sydney?

8.2 seconds. Distance does affect performance.

Google Speed Insights

Google has awesome tools to help you benchmark your mobile and desktop website speeds, and it recommends focus areas to resolve problems: https://developers.google.com/speed/pagespeed/insights/

Mobile Speed Score

Desktop Speed Score

Tips

SVG files were failing compression tests, so I added the SVG MIME type to the allowed types under “http gzip_types” in /etc/nginx/nginx.conf:

image/svg+xml

Minifying JS and CSS

This needs to be done: around 50% of my site's files appear to be CSS and JS related.

It looks like 30–40% of your site's Google speed score relates to minifying/combining JS and CSS.

Google Speed Test
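Minification mostly strips comments and whitespace (real minifiers such as Fast Velocity Minify also combine files and rewrite URLs). A toy sketch of the idea with standard tools, on a made-up stylesheet:

```shell
# Toy CSS "minifier": collapse whitespace and strip /* ... */ comments.
# Illustrative only; use a real minifier plugin on an actual site.
cat > /tmp/style.css <<'EOF'
/* header styles */
h1 {
    color: #333;
    margin: 0;
}
EOF
tr -s ' \t\n' ' ' < /tmp/style.css | sed 's|/\*[^*]*\*/||g' > /tmp/style.min.css
echo "before: $(wc -c < /tmp/style.css) bytes, after: $(wc -c < /tmp/style.min.css) bytes"
```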

I installed the Fast Velocity Minify WordPress plugin.

I ran this to install it from the command line

cd /www/wp-content/plugins
sudo wget https://downloads.wordpress.org/plugin/fast-velocity-minify.2.2.1.zip
--2017-09-23 19:51:46--  https://downloads.wordpress.org/plugin/fast-velocity-minify.2.2.1.zip
Resolving downloads.wordpress.org (downloads.wordpress.org)... 66.155.40.187, 66.155.40.203, 66.155.40.188, ...
Connecting to downloads.wordpress.org (downloads.wordpress.org)|66.155.40.187|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 821621 (802K) [application/octet-stream]
Saving to: ‘fast-velocity-minify.2.2.1.zip’

fast-velocity-minify.2.2.1.zip 100%[=================================================>] 802.36K   830KB/s    in 1.0s

2017-09-23 19:51:47 (830 KB/s) - ‘fast-velocity-minify.2.2.1.zip’ saved [821621/821621]

Unzip it:

sudo unzip fast-velocity-minify.2.2.1.zip

I activated the plugin and adjusted its settings.

Minify Settings

Verified minify logs

Logs

Google PageSpeed Insights can now see the minified CSS, JS and HTML.

Minified

Google Page Insights – Possible Optimizations

issues

Google AdWords and Google Analytics scripts appear to be holding back my Google PageSpeed Insights scores.

Google adwords and Analytics

I am getting a few false positives from plugin JavaScript, but that can be resolved another day.

Pingdom (Melbourne results)

3.2 seconds, with a few false positives though.

Pingdom

I was going to test with https://www.webpagetest.org/ (from Singapore) but the service kept stalling and had too many tests before me (even from Sydney).

Wait

Address First Byte Time (todo)

Looking at the first-byte results in the waterfall view, my site takes several seconds to deliver the first byte, which lowers the performance scores by about 20%. I need to set up a CDN and/or tune NGINX following this guide based on this manual configuration entry (I tried some of the NGINX settings, but it appears I need to compile some performance settings into NGINX).
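First-byte time is easy to measure directly with curl's -w timing variables. To keep the example self-contained the URL below is a local file; in practice point it at https://yoursite/ instead:

```shell
# Print time-to-first-byte and total time using curl's write-out variables.
echo demo > /tmp/ttfb-demo.txt
curl -s -o /dev/null \
  -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
  file:///tmp/ttfb-demo.txt
```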

CDN (todo)

I am sure a Content Delivery Network (CDN) would help with whole-page delivery and first-byte times, but I am trying to get as much for free as possible and limit future costs. A CDN will add monthly costs (any CDN providers want to donate a temporary pro plan for review purposes?).

Misc Speed Articles

  • Yoast has a good site speed article here: https://yoast.com/site-speed-tools-suggestions/
  • Nginx has a good guide on Nginx performance here: https://www.nginx.com/blog/10-tips-for-10x-application-performance/
  • Google PageSpeed tips: https://developers.google.com/speed/docs/insights/rules

Configuring Ubuntu for Performance

Preventing applications from swapping to disk (read more here)

sudo nano /etc/sysctl.conf

I added this memory-related setting.

vm.swappiness = 1

This will all but prevent applications being swapped out to disk when they are not active. I had free memory on my VM, so I may as well use it.
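The live value can be checked at any time; the Ubuntu default is 60, and after a reboot (or sudo sysctl -p) it should read 1:

```shell
# Read the kernel's currently active swappiness value (Linux only).
cat /proc/sys/vm/swappiness
```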

I will monitor the free RAM after a reboot and play with PHP memory settings.

ram

Setup Lazyload for images in posts

cd /www/wp-content/plugins/
sudo wget https://downloads.wordpress.org/plugin/bj-lazy-load.zip
sudo unzip bj-lazy-load.zip
# then activate the plugin in the WordPress admin dashboard

Lazyload Plugin Settings

Lazyload

Placeholder Image ( Image: https://fearby.com/wp-content/uploads/2017/09/placeholder.jpg )

Web Performance Test from Sydney

8.4 seconds (scorecard F A A A C, up from F F F A F). I was getting up to 28-second load times on the Net Registry cPanel servers.

Static content is cached, but Google AdSense, Google Analytics and some plugins still hold back the score. The front page has some featured content that has to be loaded and can't be minified or cached much.

Sydney Results

It is obvious I need to work on the initial website load (DNS, CDN or SSL); there are about 3 seconds I can save here.

3 sec

Configuring PHP for Performance

todo: PHP base config.

todo: PHP caching.

Conclusion

I was expecting WP Fastest Cache to deliver faster speeds; in reality I am getting about 4 seconds faster in WordPress. I was going to configure MaxCDN but they are too expensive. The Fast Velocity Minify plugin is working a treat 🙂

I ended up ditching the shared cPanel-hosted domain and set up my own server for WordPress. My site seems a lot faster now. A friend set up Cloudflare with great success; more on that soon. I blogged about my server setup here.

Adding browser caching and compression, and moving away from cPanel to a self-managed server, helped.

The only things left to try are using a CDN to speed up delivery of my site and improving the First Byte Time.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id="30" title="Ask a Question"]

Revision History

v1.932 added lazy load information (24th Sep 2017)

v1.952 added small changes (23rd Sep 2017)

etc

Filed Under: Blog, Cache, Cloud, Domain, Software, Wordpress Tagged With: cache, cdn, plugin, speed, website, wordpress

Setup Ruby, Rails, Gem and a command line twitter tool to query Twitter on Ubuntu 16.04 via a Twitter App

September 17, 2017 by Simon

Below is how I set up Ruby, Rails, Gem and a command-line Twitter tool to query Twitter on Ubuntu 16.04 via a Twitter app.

Setup Twitter feed scraping on Ubuntu 16.04

At first I had no network: I could not ping, run a system update or install packages (even though I had opened the firewall ports and temporarily disabled the firewall). I fixed this by editing /etc/resolv.conf and adding a Google DNS entry.

sudo nano /etc/resolv.conf

Added the Google DNS server.

nameserver 8.8.8.8

Bingo, I can now ping and update my system. (Note that on Ubuntu 16.04 /etc/resolv.conf can be rewritten on reboot, so make the DNS change permanent in your network configuration if the problem returns.)

Setup Ruby and Pre-Requisites

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
git clone https://github.com/rbenv/rbenv.git ~/.rbenv
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
git clone https://github.com/rbenv/ruby-build.git ~/.rbenv/plugins/ruby-build
echo 'export PATH="$HOME/.rbenv/plugins/ruby-build/bin:$PATH"' >> ~/.bashrc
exec $SHELL
rbenv install 2.4.1
rbenv global 2.4.1

If Ruby 2.4.1 fails to install, try installing the older Ruby 2.2.1.

Error

rbenv install 2.4.1
Downloading ruby-2.4.1.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.4/ruby-2.4.1.tar.bz2
Installing ruby-2.4.1...

BUILD FAILED (Ubuntu 16.04 using ruby-build 20170914-2-ge40cd1f)
...

Optional Troubleshooting: Install Ruby 2.2.1 (if 2.4.1 fails to install)

rbenv install 2.2.1
rbenv global 2.2.1

Optional Troubleshooting: Ruby 2.2.1 is no longer recommended

rbenv install 2.2.1
Downloading ruby-2.2.1.tar.bz2...
-> https://cache.ruby-lang.org/pub/ruby/2.2/ruby-2.2.1.tar.bz2
Installing ruby-2.2.1...

WARNING: ruby-2.2.1 is nearing its end of life.
It only receives critical security updates, no bug fixes.
...

Or

Optional Install: Ruby 2.4.0

mkdir ~/.rbenv/cache
# download the Ruby archive manually
wget https://cache.ruby-lang.org/pub/ruby/2.4/ruby-2.4.0.tar.bz2
# move the file into rbenv's cache
mv ruby-2.4.0.tar.bz2 ~/.rbenv/cache
# run the install
rbenv install 2.4.0

Hopefully, Ruby is now installed

ruby -v
ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]

Install Bundler Gem

gem install bundler

Install Rails

Rails needs a JavaScript runtime, so the usual guides install Node.js:

curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -

I skipped this install, as I already had Node installed.

node -v
v6.11.3

Continue with the Rails install:

gem install rails -v 5.1.3
rbenv rehash

Rails is installed

rails -v
Rails 5.1.3

Install the t gem (read more at the https://github.com/sferik/t GitHub repository)

gem install t

You can now authorize the t gem to use your Twitter account.

t authorize
Welcome! Before you can use t, you'll first need to register an
application with Twitter. Just follow the steps below:
  1. Sign in to the Twitter Application Management site and click
     "Create New App".
  2. Complete the required fields and submit the form.
     Note: Your application must have a unique name.
  3. Go to the Permissions tab of your application, and change the
     Access setting to "Read, Write and Access direct messages".
  4. Go to the Keys and Access Tokens tab to view the consumer key
     and secret which you'll need to copy and paste below when
     prompted.

But first, let's create a Twitter app.

Go to https://apps.twitter.com/, log in and create an app.

Twitter App

Twitter will provide app details when you create the app.

Created Twitter App

Go to Permissions and set “Read, Write and Access direct messages” and save changes.

Twitter App Permissions

Linking the Twitter app to t Gem

Now that the Twitter app is created, let's activate it and link it to the t gem.

Run the following command to start the authorization process.

t authorize

The authorization process is dead simple, just follow the on-screen prompts.

Welcome! Before you can use t, you'll first need to register an
application with Twitter. Just follow the steps below:
  1. Sign in to the Twitter Application Management site and click
     "Create New App".
  2. Complete the required fields and submit the form.
     Note: Your application must have a unique name.
  3. Go to the Permissions tab of your application, and change the
     Access setting to "Read, Write and Access direct messages".
  4. Go to the Keys and Access Tokens tab to view the consumer key
     and secret which you'll need to copy and paste below when
     prompted.

Press [Enter] to open the Twitter Developer site.

Open: https://apps.twitter.com
Enter your API key: ########################
Enter your API secret: ################################################

In a moment, you will be directed to the Twitter app authorization page.
Perform the following steps to complete the authorization process:
  1. Sign in to Twitter.
  2. Press "Authorize app".
  3. Copy and paste the supplied PIN below when prompted.

Press [Enter] to open the Twitter app authorization page.

Open: https://api.twitter.com/oauth/authorize?oauth_callback=oob&oauth_consumer_key=################################################&oauth_signature=################################################&oauth_signature_method=HMAC-SHA1&oauth_timestamp=123456789&oauth_token=########################&oauth_version=1.0
Enter the supplied PIN: ######
Authorization successful.

This was easy.

fyi: Authorization (Twitter authorize app screenshot)

Authorize Twitter

fyi: Authorization (Twitter authorize app pin screenshot)

Authorize Pin

Using T

T Help

t help
Commands:
  t accounts                          # List accounts
  t authorize                         # Allows an application to request user authorization
  t block USER [USER...]              # Block users.
  t delete SUBCOMMAND ...ARGS         # Delete Tweets, Direct Messages, etc.
  t direct_messages                   # Returns the 20 most recent Direct Messages sent to you.
  t direct_messages_sent              # Returns the 20 most recent Direct Messages you've sent.
  t dm USER MESSAGE                   # Sends that person a Direct Message.
  t does_contain [USER/]LIST USER     # Find out whether a list contains a user.
  t does_follow USER [USER]           # Find out whether one user follows another.
  t favorite TWEET_ID [TWEET_ID...]   # Marks Tweets as favorites.
  t favorites [USER]                  # Returns the 20 most recent Tweets you favorited.
  t follow USER [USER...]             # Allows you to start following users.
  t followers [USER]                  # Returns a list of the people who follow you on Twitter.
  t followings [USER]                 # Returns a list of the people you follow on Twitter.
  t followings_following USER [USER]  # Displays your friends who follow the specified user.
  t friends [USER]                    # Returns the list of people who you follow and follow you back.
  t groupies [USER]                   # Returns the list of people who follow you but you don't follow back.
  t help [COMMAND]                    # Describe available commands or one specific command
  t intersection USER [USER...]       # Displays the intersection of users followed by the specified users.
  t leaders [USER]                    # Returns the list of people who you follow but don't follow you back.
  t list SUBCOMMAND ...ARGS           # Do various things with lists.
  t lists [USER]                      # Returns the lists created by a user.
  t matrix                            # Unfortunately, no one can be told what the Matrix is. You have to see it for y...
  t mentions                          # Returns the 20 most recent Tweets mentioning you.
  t mute USER [USER...]               # Mute users.
  t muted [USER]                      # Returns a list of the people you have muted on Twitter.
  t open USER                         # Opens that user's profile in a web browser.
  t reach TWEET_ID                    # Shows the maximum number of people who may have seen the specified tweet in th...
  t reply TWEET_ID [MESSAGE]          # Post your Tweet as a reply directed at another person.
  t report_spam USER [USER...]        # Report users for spam.
  t retweet TWEET_ID [TWEET_ID...]    # Sends Tweets to your followers.
  t retweets [USER]                   # Returns the 20 most recent Retweets by a user.
  t retweets_of_me                    # Returns the 20 most recent Tweets of the authenticated user that have been ret...
  t ruler                             # Prints a 140-character ruler
  t search SUBCOMMAND ...ARGS         # Search through Tweets.
  t set SUBCOMMAND ...ARGS            # Change various account settings.
  t status TWEET_ID                   # Retrieves detailed information about a Tweet.
  t stream SUBCOMMAND ...ARGS         # Commands for streaming Tweets.
  t timeline [USER]                   # Returns the 20 most recent Tweets posted by a user.
  t trend_locations                   # Returns the locations for which Twitter has trending topic information.
  t trends [WOEID]                    # Returns the top 50 trending topics.
  t unfollow USER [USER...]           # Allows you to stop following users.
  t update [MESSAGE]                  # Post a Tweet.
  t users USER [USER...]              # Returns a list of users you specify.
  t version                           # Show version.
  t whoami                            # Retrieves profile information for the authenticated user.
  t whois USER                        # Retrieves profile information for the user.

Options:
  -C, [--color=COLOR]   # Control how color is used in output
                        # Default: auto
                        # Possible values: icon, auto, never
  -P, [--profile=FILE]  # Path to RC file
                        # Default: /root/.trc

Show linked Twitter accounts with t

t accounts
yourappnamehere
  ###################### (active)

Set the active Twitter account

t set active yourtwitterappnamehere ########################
Active account has been updated to yourtwitterappnamehere.

Using t to query a Twitter user

t whois @fearbysoftware
ID           1468627891
Since        May 30  2013 (4 years ago)
Last update  Editing remote files locally with sublime text editor over ssh https://t.co/k5qSnHmUrP #VoteYes (7 hours ago)
Screen name  @FearbySoftware
Name         Simon Fearby
Tweets       4,797
Favorites    940
Listed       88
Following    1,933
Followers    616
Bio          Developing augmented reality mobile apps, websites, ardrino and raspberry pi code/circuits etc. Tweets are my own not my employer. Blog at https://t.co/Azo81pi8Yt
Location     Tamworth NSW, Australia
URL          http://www.fearby.com

Search Twitter for “fearby”

t search all "lang:en fearby"

Output:

t search all "lang:en fearby"

@FearbySoftware
@troyhunt Google AdWords have worked for me https://t.co/KGZAd0sWkG

@FearbySoftware
Blogged setting up my own Ubuntu server to replace Cpanel for $2.5 a month https://t.co/GZCIMesaqJ

@FearbySoftware
Blogged Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute https://t.co/QWiyR2I9ur

@MedinaSports
JV boys ⚽️ tied Roy-Hart 1-1. AJ Seefeldt scored the lone goal w/ 20' left in regulation. Zach Fike & Cooper Fearby
made great saves in goal

@FearbySoftware
Today's SEO experiment : Not pimping blog posts results in half the user hits and impressions. https://t.co/Q9eCoUZy9n

@FearbySoftware
@0xDUDE any advice on security my MongoDB more? Need whitlist IP, use the non standard port and have usr/pwd
https://t.co/5TEDz8LCJo

@FearbySoftware
Creating and configuring a CentOS server on Digital Ocean https://t.co/aI3FYKSFQC

@FearbySoftware
Self Service Status Pages https://t.co/F6ZjN2sdfM

@FearbySoftware
Alibaba Cloud how good is it? https://t.co/YbuWgvyDz8

@breakingnewsng_
IGBONLA SIX: Four of freed students resume studies - …Say there’s nothing to fearBy Monsuru Olowoopejolagos—Two...
https://t.co/sosKnSpT3h

@AFairymary
RT @AFairymary: Congratulations to William Fearby, Author of the Month. https://t.co/jtP0Sn0QCl

@FearbySoftware
I guess I need to get an #iPhoneX to develop apps on an post updates to my free dev blog https://t.co/9x5TFARLCt
#apple #iOS11 #AppleEvent

@fearby_nick
RT @ndonnelly88_: @NJDevils How many retweets for free season tickets?

@PrincessMutanu
RT @CBooksFree: $0.99—Imagine Your Life Without Fear—by Max Lucado https://t.co/pUVi2wWM1J https://t.co/7YakQTjzZf

@corund
RT @FearbySoftware: Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute https://t.co/QWiyR2I9ur
#free #SSL #website #wordpress #nodejs
...

Search Twitter for “fearby” and output as CSV

t search all "lang:en fearby" --csv
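The CSV output is handy for quick post-processing with standard tools, for example counting tweets per screen name. The sample file below stands in for t's real output (whose exact columns may differ slightly); in practice you would pipe the command above instead:

```shell
# Count tweets per screen name from a t-style CSV export.
# Sample data is made up; the naive comma split is fine while
# the first three columns contain no embedded commas.
cat > /tmp/tweets.csv <<'EOF'
ID,Posted at,Screen name,Text
1,2017-09-23,FearbySoftware,"Blogged about NGINX"
2,2017-09-23,AFairymary,"Congratulations"
3,2017-09-24,FearbySoftware,"Another post"
EOF
tail -n +2 /tmp/tweets.csv | cut -d, -f3 | sort | uniq -c | sort -rn
```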

See more commands here: https://github.com/sferik/t

This is great: I can interact with Twitter from the command line and from apps without having to do full REST API and OAuth development.

Calling T from NodeJS

Read the guide here on calling T from NodeJS.

Calling T from php (Under construction)

Coming soon (this PHP section is under development)

You may need to remove “pcntl_exec” from the blocked functions listed under “disable_functions” in “php.ini”.

Find your php.ini by typing:

find / -iname "php.ini"

Restart PHP-FPM

sudo service php7.0-fpm restart
php7.0-fpm stop/waiting
php7.0-fpm start/running, process #####
[email protected]:/www# service php7.0-fpm status
php7.0-fpm start/running, process #####

This section is under development.

Todo: Security.

Need a server?

Set up a Server on Vultr here for as low as $2.5 a month or set up a Server on Digital Ocean (and get the first 2 months free ($5/m server)). I have a guide on setting up a Vultr server here or Digital Ocean server here.  Don’t forget to add a free LetsEncrypt SSL Certificate and secure the server (read more here and here).

Still here, read more articles here or use the form below to ask a question or recommend an article.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id="30" title="Ask a Question"]

Version History

v1.3.2 added Ruby 2.2.0 install info (29th Sep 2017)

v1.3.1 added Ruby 2.2.1 install info (27th Sep 2017)

v1.3 Querying T in NodeJS

Filed Under: Cloud, Firewall, Twitter, Website Tagged With: command line, gem, rails, ruby

Run an Ubuntu VM system audit with Lynis

September 11, 2017 by Simon

Following on from my Securing Ubuntu in the cloud blog post, I have installed the Lynis open-source security audit tool to check the security of my server in the cloud.

Lynis is an open source security auditing tool. Used by system administrators, security professionals, and auditors, to evaluate the security defences of their Linux and Unix-based systems. It runs on the host itself, so it performs more extensive security scans than vulnerability scanners. https://cisofy.com/lynis and https://github.com/CISOfy/lynis.

It is easy to set up a server in the cloud (create a server on Vultr or Digital Ocean here). Guides on setting up servers exist (set up a Vultr VM and configure it, and set up a Digital Ocean server), but what about securing them? You can install a LetsEncrypt SSL certificate in minutes or set up Content Security Policy and Public Key Pinning, but don't forget to get an in-depth external review of the security of your server(s).

Lynis Security Auditing Tool

Preparing install location (for Lynis)

cd /
mkdir utils
cd utils/

Install Lynis

sudo git clone https://www.github.com/CISOfy/lynis
Cloning into 'lynis'...
remote: Counting objects: 8357, done.
remote: Compressing objects: 100% (45/45), done.
remote: Total 8357 (delta 28), reused 42 (delta 17), pack-reused 8295
Receiving objects: 100% (8357/8357), 3.94 MiB | 967.00 KiB/s, done.
Resolving deltas: 100% (6121/6121), done.
Checking connectivity... done.

Running a Lynis system scan (the -Q flag runs a quick scan that does not wait for user input)

./lynis audit system -Q
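Once you are happy with a manual run, Lynis can also be scheduled from cron using its --cronjob option (non-interactive output). A sketch that writes the entry to /tmp for review first; the install path matches the steps above, and the log path is an example:

```shell
# Sketch of a weekly (3am Sunday) Lynis audit cron entry.
# Written to /tmp for review; copy to /etc/cron.d/ on the real server.
cat > /tmp/lynis-audit <<'EOF'
0 3 * * 0 root cd /utils/lynis && ./lynis audit system --cronjob >> /var/log/lynis-cron.log 2>&1
EOF
cat /tmp/lynis-audit
```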

Lynis Results 1/3 Output (removed sensitive output)

[ Lynis 2.5.5 ]

################################################################################
  Lynis comes with ABSOLUTELY NO WARRANTY. This is free software, and you are
  welcome to redistribute it under the terms of the GNU General Public License.
  See the LICENSE file for details about using this software.

  2007-2017, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)
################################################################################


[+] Initializing program
------------------------------------
- Detecting OS...  [ DONE ]
- Checking profiles... [ DONE ]

  ---------------------------------------------------
  Program version:           2.5.5
  Operating system:          Linux
  Operating system name:     Ubuntu Linux
  Operating system version:  16.04
  Kernel version:            4.4.0
  Hardware platform:         x86_64
  Hostname:                  yourservername
  ---------------------------------------------------
  Profiles:                  /linis/lynis/default.prf
  Log file:                  /var/log/lynis.log
  Report file:               /var/log/lynis-report.dat
  Report version:            1.0
  Plugin directory:          ./plugins
  ---------------------------------------------------
  Auditor:                   [Not Specified]
  Test category:             all
  Test group:                all
  ---------------------------------------------------
- Program update status...  [ NO UPDATE ]

[+] System Tools
------------------------------------
- Scanning available tools...
- Checking system binaries...

[+] Plugins (phase 1)
------------------------------------
- Note: plugins have more extensive tests and may take several minutes to complete
- Plugin pam
    [..]
- Plugin systemd
    [................]

[+] Boot and services
------------------------------------
- Service Manager [ systemd ]
- Checking UEFI boot [ DISABLED ]
- Checking presence GRUB [ OK ]
- Checking presence GRUB2 [ FOUND ]
- Checking for password protection [ OK ]
- Check running services (systemctl) [ DONE ]
: found 24 running services
- Check enabled services at boot (systemctl) [ DONE ]
: found 30 enabled services
- Check startup files (permissions) [ OK ]

[+] Kernel
------------------------------------
- Checking default run level [ RUNLEVEL 5 ]
- Checking CPU support (NX/PAE)
 support: PAE and/or NoeXecute supported [ FOUND ]
- Checking kernel version and release [ DONE ]
- Checking kernel type [ DONE ]
- Checking loaded kernel modules [ DONE ]
active modules
- Checking Linux kernel configuration file [ FOUND ]
- Checking default I/O kernel scheduler [ FOUND ]
- Checking for available kernel update [ OK ]
- Checking core dumps configuration [ DISABLED ]
- Checking setuid core dumps configuration [ PROTECTED ]
- Check if reboot is needed [ NO ]

[+] Memory and Processes
------------------------------------
- Checking /proc/meminfo [ FOUND ]
- Searching for dead/zombie processes [ OK ]
- Searching for IO waiting processes [ OK ]

[+] Users, Groups and Authentication
------------------------------------
- Administrator accounts [ OK ]
- Unique UIDs [ OK ]
- Consistency of group files (grpck) [ OK ]
- Unique group IDs [ OK ]
- Unique group names [ OK ]
- Password file consistency [ OK ]
- Query system users (non daemons) [ DONE ]
- NIS+ authentication support [ NOT ENABLED ]
- NIS authentication support [ NOT ENABLED ]
- sudoers file [ FOUND ]
- Check sudoers file permissions [ OK ]
- PAM password strength tools [ OK ]
- PAM configuration files (pam.conf) [ FOUND ]
- PAM configuration files (pam.d) [ FOUND ]
- PAM modules [ FOUND ]
- LDAP module in PAM [ NOT FOUND ]
- Accounts without expire date [ OK ]
- Accounts without password [ OK ]
- Checking user password aging (minimum) [ DISABLED ]
- User password aging (maximum) [ DISABLED ]
- Checking expired passwords [ OK ]
- Checking Linux single user mode authentication [ OK ]
- Determining default umask
- umask (/etc/profile) [ NOT FOUND ]
- umask (/etc/login.defs) [ SUGGESTION ]
- umask (/etc/init.d/rc) [ SUGGESTION ]
- LDAP authentication support [ NOT ENABLED ]
- Logging failed login attempts [ ENABLED ]

[+] Shells
------------------------------------
- Checking shells from /etc/shells
: found 6 shells (valid shells: 6).
- Session timeout settings/tools [ NONE ]
- Checking default umask values
- Checking default umask in /etc/bash.bashrc [ NONE ]
- Checking default umask in /etc/profile [ NONE ]

[+] File systems
------------------------------------
- Checking mount points
- Checking /home mount point [ SUGGESTION ]
- Checking /tmp mount point [ SUGGESTION ]
- Checking /var mount point [ SUGGESTION ]
- Query swap partitions (fstab) [ NONE ]
- Testing swap partitions [ OK ]
- Testing /proc mount (hidepid) [ SUGGESTION ]
- Checking for old files in /tmp [ OK ]
- Checking /tmp sticky bit [ OK ]
- ACL support root file system [ ENABLED ]
- Mount options of / [ NON DEFAULT ]
- Checking Locate database [ FOUND ]
- Disable kernel support of some filesystems
- Discovered kernel modules: cramfs freevxfs hfs hfsplus jffs2 udf 

[+] Storage
------------------------------------
- Checking usb-storage driver (modprobe config) [ NOT DISABLED ]
- Checking USB devices authorization [ ENABLED ]
- Checking firewire ohci driver (modprobe config) [ DISABLED ]

[+] NFS
------------------------------------
- Check running NFS daemon [ NOT FOUND ]

[+] Name services
------------------------------------
- Searching DNS domain name [ UNKNOWN ]
- Checking /etc/hosts
- Checking /etc/hosts (duplicates) [ OK ]
- Checking /etc/hosts (hostname) [ OK ]
- Checking /etc/hosts (localhost) [ SUGGESTION ]
- Checking /etc/hosts (localhost to IP) [ OK ]

[+] Ports and packages
------------------------------------
- Searching package managers
- Searching dpkg package manager [ FOUND ]
- Querying package manager
- Query unpurged packages [ NONE ]
- Checking security repository in sources.list file [ OK ]
- Checking APT package database [ OK ]
- Checking vulnerable packages [ OK ]
- Checking upgradeable packages [ SKIPPED ]
- Checking package audit tool [ INSTALLED ]

[+] Networking
------------------------------------
- Checking IPv6 configuration [ ENABLED ]
 method [ AUTO ]
 only [ NO ]
- Checking configured nameservers
- Testing nameservers
: 108.xx.xx.xx [ OK ]
: 2001:xxx:xxx:xxx::6 [ OK ]
- Minimal of 2 responsive nameservers [ OK ]
- Checking default gateway [ DONE ]
- Getting listening ports (TCP/UDP) [ DONE ]
* Found 18 ports
- Checking promiscuous interfaces [ OK ]
- Checking waiting connections [ OK ]
- Checking status DHCP client [ NOT ACTIVE ]
- Checking for ARP monitoring software [ NOT FOUND ]

[+] Printers and Spools
------------------------------------
- Checking cups daemon [ NOT FOUND ]
- Checking lp daemon [ NOT RUNNING ]

[+] Software: e-mail and messaging
------------------------------------
- Sendmail status [ RUNNING ]

[+] Software: firewalls
------------------------------------
- Checking iptables kernel module [ FOUND ]
- Checking iptables policies of chains [ FOUND ]
- Checking for empty ruleset [ OK ]
- Checking for unused rules [ FOUND ]
- Checking host based firewall [ ACTIVE ]

[+] Software: webserver
------------------------------------
- Checking Apache (binary /usr/sbin/apache2) [ FOUND ]
: No virtual hosts found
* Loadable modules [ FOUND (106) ]
- Found 106 loadable modules 
- anti-DoS/brute force [ OK ]
- web application firewall [ OK ]
- Checking nginx [ FOUND ]
- Searching nginx configuration file [ FOUND ]
- Found nginx includes [ 2 FOUND ]
- Parsing configuration options
- /etc/nginx/nginx.conf
- /etc/nginx/sites-enabled/default
- SSL configured [ YES ]
- Ciphers configured [ YES ]
- Prefer server ciphers [ YES ]
- Protocols configured [ YES ]
- Insecure protocols found [ NO ]
- Checking log file configuration
- Missing log files (access_log) [ NO ]
- Disabled access logging [ NO ]
- Missing log files (error_log) [ NO ]
- Debugging mode on error_log [ NO ]

[+] SSH Support
------------------------------------
- Checking running SSH daemon [ FOUND ]
- Searching SSH configuration [ FOUND ]
- SSH option: AllowTcpForwarding [ SUGGESTION ]
- SSH option: ClientAliveCountMax [ SUGGESTION ]
- SSH option: ClientAliveInterval [ OK ]
- SSH option: Compression [ SUGGESTION ]
- SSH option: FingerprintHash [ OK ]
- SSH option: GatewayPorts [ OK ]
- SSH option: IgnoreRhosts [ OK ]
- SSH option: LoginGraceTime [ OK ]
- SSH option: LogLevel [ SUGGESTION ]
- SSH option: MaxAuthTries [ SUGGESTION ]
- SSH option: MaxSessions [ SUGGESTION ]
- SSH option: PermitRootLogin [ SUGGESTION ]
- SSH option: PermitUserEnvironment [ OK ]
- SSH option: PermitTunnel [ OK ]
- SSH option: Port [ SUGGESTION ]
- SSH option: PrintLastLog [ OK ]
- SSH option: Protocol [ OK ]
- SSH option: StrictModes [ OK ]
- SSH option: TCPKeepAlive [ SUGGESTION ]
- SSH option: UseDNS [ OK ]
- SSH option: VerifyReverseMapping [ NOT FOUND ]
- SSH option: X11Forwarding [ SUGGESTION ]
- SSH option: AllowAgentForwarding [ SUGGESTION ]
- SSH option: AllowUsers [ NOT FOUND ]
- SSH option: AllowGroups [ NOT FOUND ]

[+] SNMP Support
------------------------------------
- Checking running SNMP daemon [ NOT FOUND ]

[+] Databases
------------------------------------
- MySQL process status [ FOUND ]

[+] LDAP Services
------------------------------------
- Checking OpenLDAP instance [ NOT FOUND ]

[+] PHP
------------------------------------
- Checking PHP [ FOUND ]
- Checking PHP disabled functions [ FOUND ]
- Checking expose_php option [ OFF ]
- Checking enable_dl option [ OFF ]
- Checking allow_url_fopen option [ ON ]
- Checking allow_url_include option [ OFF ]
- Checking PHP suhosin extension status [ OK ]
- Suhosin simulation mode status [ OK ]

[+] Squid Support
------------------------------------
- Checking running Squid daemon [ NOT FOUND ]

[+] Logging and files
------------------------------------
- Checking for a running log daemon [ OK ]
- Checking Syslog-NG status [ NOT FOUND ]
- Checking systemd journal status [ FOUND ]
- Checking Metalog status [ NOT FOUND ]
- Checking RSyslog status [ FOUND ]
- Checking RFC 3195 daemon status [ NOT FOUND ]
- Checking minilogd instances [ NOT FOUND ]
- Checking logrotate presence [ OK ]
- Checking log directories (static list) [ DONE ]
- Checking open log files [ DONE ]
- Checking deleted files in use [ FILES FOUND ]

[+] Insecure services
------------------------------------
- Checking inetd status [ NOT ACTIVE ]

[+] Banners and identification
------------------------------------
- /etc/issue [ FOUND ]
- /etc/issue contents [ OK ]
- /etc/issue.net [ FOUND ]
- /etc/issue.net contents [ OK ]

[+] Scheduled tasks
------------------------------------
- Checking crontab/cronjob [ DONE ]
- Checking atd status [ RUNNING ]
- Checking at users [ DONE ]
- Checking at jobs [ NONE ]

[+] Accounting
------------------------------------
- Checking accounting information [ NOT FOUND ]
- Checking sysstat accounting data [ NOT FOUND ]
- Checking auditd [ NOT FOUND ]

[+] Time and Synchronization
------------------------------------
- NTP daemon found: ntpd [ FOUND ]
- NTP daemon found: systemd (timesyncd) [ FOUND ]
- Checking for a running NTP daemon or client [ OK ]
- Checking valid association ID's [ FOUND ]
- Checking high stratum ntp peers [ OK ]
- Checking unreliable ntp peers [ FOUND ]
- Checking selected time source [ OK ]
- Checking time source candidates [ OK ]
- Checking falsetickers [ OK ]
- Checking NTP version [ FOUND ]

[+] Cryptography
------------------------------------
- Checking for expired SSL certificates [0/1] [ NONE ]

[+] Virtualization
------------------------------------

[+] Containers
------------------------------------

[+] Security frameworks
------------------------------------
- Checking presence AppArmor [ FOUND ]
- Checking AppArmor status [ ENABLED ]
- Checking presence SELinux [ NOT FOUND ]
- Checking presence grsecurity [ NOT FOUND ]
- Checking for implemented MAC framework [ OK ]

[+] Software: file integrity
------------------------------------
- Checking file integrity tools
- Checking presence integrity tool [ NOT FOUND ]

[+] Software: System tooling
------------------------------------
- Checking automation tooling
- Automation tooling [ NOT FOUND ]
- Checking presence of Fail2ban [ FOUND ]
- Checking Fail2ban jails [ ENABLED ]
- Checking for IDS/IPS tooling [ FOUND ]

[+] Software: Malware
------------------------------------

[+] File Permissions
------------------------------------
- Starting file permissions check
/root/.ssh [ OK ]

[+] Home directories
------------------------------------
- Checking shell history files [ OK ]

[+] Kernel Hardening
------------------------------------
- Comparing sysctl key pairs with scan profile
- fs.protected_hardlinks (exp: 1) [ OK ]
- fs.protected_symlinks (exp: 1) [ OK ]
- fs.suid_dumpable (exp: 0) [ DIFFERENT ]
- kernel.core_uses_pid (exp: 1) [ DIFFERENT ]
- kernel.ctrl-alt-del (exp: 0) [ OK ]
- kernel.dmesg_restrict (exp: 1) [ DIFFERENT ]
- kernel.kptr_restrict (exp: 2) [ DIFFERENT ]
- kernel.randomize_va_space (exp: 2) [ OK ]
- kernel.sysrq (exp: 0) [ DIFFERENT ]
- net.ipv4.conf.all.accept_redirects (exp: 0) [ OK ]
- net.ipv4.conf.all.accept_source_route (exp: 0) [ OK ]
- net.ipv4.conf.all.bootp_relay (exp: 0) [ OK ]
- net.ipv4.conf.all.forwarding (exp: 0) [ OK ]
- net.ipv4.conf.all.log_martians (exp: 1) [ DIFFERENT ]
- net.ipv4.conf.all.mc_forwarding (exp: 0) [ OK ]
- net.ipv4.conf.all.proxy_arp (exp: 0) [ OK ]
- net.ipv4.conf.all.rp_filter (exp: 1) [ OK ]
- net.ipv4.conf.all.send_redirects (exp: 0) [ DIFFERENT ]
- net.ipv4.conf.default.accept_redirects (exp: 0) [ OK ]
- net.ipv4.conf.default.accept_source_route (exp: 0) [ OK ]
- net.ipv4.conf.default.log_martians (exp: 1) [ DIFFERENT ]
- net.ipv4.icmp_echo_ignore_broadcasts (exp: 1) [ OK ]
- net.ipv4.icmp_ignore_bogus_error_responses (exp: 1) [ OK ]
- net.ipv4.tcp_syncookies (exp: 1) [ DIFFERENT ]
- net.ipv4.tcp_timestamps (exp: 0) [ DIFFERENT ]
- net.ipv6.conf.all.accept_redirects (exp: 0) [ OK ]
- net.ipv6.conf.all.accept_source_route (exp: 0) [ OK ]
- net.ipv6.conf.default.accept_redirects (exp: 0) [ OK ]
- net.ipv6.conf.default.accept_source_route (exp: 0) [ OK ]

[+] Hardening
------------------------------------
- Installed compiler(s) [ FOUND ]
- Installed malware scanner [ NOT FOUND ]

[+] Custom Tests
------------------------------------
- Running custom tests...  [ NONE ]

[+] Plugins (phase 2)
------------------------------------
- Plugins (phase 2) [ DONE ]

================================================================================

...

Lynis Results 2/3 – Warnings

  Warnings (1):
  ----------------------------
  ! Found one or more vulnerable packages. [REMOVED-FIXED] 
      https://cisofy.com/controls/REMOVED-FIXED/
...

I resolved the only warning by typing

apt-get update
apt-get upgrade
shutdown -r now

After updating the system I re-ran the Lynis scan and got

 -[ Lynis 2.5.5 Results ]-

  Great, no warnings

Lynis Results 3/3 – Suggestions

  Suggestions (44):
  ----------------------------
  * Set a password on GRUB bootloader to prevent altering boot configuration (e.g. boot in single user mode without password) [BOOT-5122] 
      https://cisofy.com/controls/BOOT-5122/

  * Configure minimum password age in /etc/login.defs [AUTH-9286] 
      https://cisofy.com/controls/AUTH-9286/

  * Configure maximum password age in /etc/login.defs [AUTH-9286] 
      https://cisofy.com/controls/AUTH-9286/

  * Default umask in /etc/login.defs could be more strict like 027 [AUTH-9328] 
      https://cisofy.com/controls/AUTH-9328/

  * Default umask in /etc/init.d/rc could be more strict like 027 [AUTH-9328] 
      https://cisofy.com/controls/AUTH-9328/

  * To decrease the impact of a full /home file system, place /home on a separated partition [FILE-6310] 
      https://cisofy.com/controls/FILE-6310/

  * To decrease the impact of a full /tmp file system, place /tmp on a separated partition [FILE-6310] 
      https://cisofy.com/controls/FILE-6310/

  * To decrease the impact of a full /var file system, place /var on a separated partition [FILE-6310] 
      https://cisofy.com/controls/FILE-6310/

  * Disable drivers like USB storage when not used, to prevent unauthorized storage or data theft [STRG-1840] 
      https://cisofy.com/controls/STRG-1840/

  * Check DNS configuration for the dns domain name [NAME-4028] 
      https://cisofy.com/controls/NAME-4028/

  * Split resolving between localhost and the hostname of the system [NAME-4406] 
      https://cisofy.com/controls/NAME-4406/

  * Install debsums utility for the verification of packages with known good database. [PKGS-7370] 
      https://cisofy.com/controls/PKGS-7370/

  * Update your system with apt-get update, apt-get upgrade, apt-get dist-upgrade and/or unattended-upgrades [PKGS-7392] 
      https://cisofy.com/controls/PKGS-7392/

  * Install package apt-show-versions for patch management purposes [PKGS-7394] 
      https://cisofy.com/controls/PKGS-7394/

  * Consider running ARP monitoring software (arpwatch,arpon) [NETW-3032] 
      https://cisofy.com/controls/NETW-3032/

  * Check iptables rules to see which rules are currently not used [FIRE-4513] 
      https://cisofy.com/controls/FIRE-4513/

  * Install Apache mod_evasive to guard webserver against DoS/brute force attempts [HTTP-6640] 
      https://cisofy.com/controls/HTTP-6640/

  * Install Apache modsecurity to guard webserver against web application attacks [HTTP-6643] 
      https://cisofy.com/controls/HTTP-6643/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : AllowTcpForwarding (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : ClientAliveCountMax (3 --> 2)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : Compression (DELAYED --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : LogLevel (INFO --> VERBOSE)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : MaxAuthTries (2 --> 1)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : MaxSessions (10 --> 2)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : PermitRootLogin (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : Port (22 --> )
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : TCPKeepAlive (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : X11Forwarding (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Consider hardening SSH configuration [SSH-7408] 
    - Details  : AllowAgentForwarding (YES --> NO)
      https://cisofy.com/controls/SSH-7408/

  * Change the allow_url_fopen line to: allow_url_fopen = Off, to disable downloads via PHP [PHP-2376] 
      https://cisofy.com/controls/PHP-2376

  * Check what deleted files are still in use and why. [LOGG-2190] 
      https://cisofy.com/controls/LOGG-2190/

  * Add a legal banner to /etc/issue, to warn unauthorized users [BANN-7126] 
      https://cisofy.com/controls/BANN-7126/

  * Add legal banner to /etc/issue.net, to warn unauthorized users [BANN-7130] 
      https://cisofy.com/controls/BANN-7130/

  * Enable process accounting [ACCT-9622] 
      https://cisofy.com/controls/ACCT-9622/

  * Enable sysstat to collect accounting (no results) [ACCT-9626] 
      https://cisofy.com/controls/ACCT-9626/

  * Enable auditd to collect audit information [ACCT-9628] 
      https://cisofy.com/controls/ACCT-9628/

  * Check ntpq peers output for unreliable ntp peers and correct/replace them [TIME-3120] 
      https://cisofy.com/controls/TIME-3120/

  * Install a file integrity tool to monitor changes to critical and sensitive files [FINT-4350] 
      https://cisofy.com/controls/FINT-4350/

  * Determine if automation tools are present for system management [TOOL-5002] 
      https://cisofy.com/controls/TOOL-5002/

  * One or more sysctl values differ from the scan profile and could be tweaked [KRNL-6000] 
      https://cisofy.com/controls/KRNL-6000/

  * Harden compilers like restricting access to root user only [HRDN-7222] 
      https://cisofy.com/controls/HRDN-7222/

  * Harden the system by installing at least one malware scanner, to perform periodic file system scans [HRDN-7230] 
    - Solution : Install a tool like rkhunter, chkrootkit, OSSEC
      https://cisofy.com/controls/HRDN-7230/

  Follow-up
  ----------------------------
  - Show details of a test (lynis show details TEST-ID)
  - Check the logfile for all details (less /var/log/lynis.log)
  - Read security controls texts (https://cisofy.com)
  - Use --upload to upload data to central system (Lynis Enterprise users)

================================================================================

  Lynis security scan details

  Hardening index : 64 [############        ]
  Tests performed : 255
  Plugins enabled : 2

  Components
  - Firewall               [V]
  - Malware scanner        [X]

  Lynis Modules
  - Compliance Status      [?]
  - Security Audit         [V]
  - Vulnerability Scan     [V]

  Files
  - Test and debug information      : /var/log/lynis.log
  - Report data                     : /var/log/lynis-report.dat

================================================================================

  Lynis 2.5.5

  Auditing, system hardening, and compliance for UNIX-based systems
  (Linux, macOS, BSD, and others)

  2007-2017, CISOfy - https://cisofy.com/lynis/
  Enterprise support available (compliance, plugins, interface and tools)

================================================================================

  [TIP] Enhance Lynis audits by adding your settings to custom.prf (see /linis/lynis/default.prf for all settings)
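The summary above also lands in /var/log/lynis-report.dat as plain key=value pairs, which makes it easy to track the score between runs. A minimal sketch (the key name "hardening_index" is an assumption based on my report file; verify it against your own, and note the script falls back to a synthetic report so it runs anywhere):

```shell
#!/bin/sh
# Sketch: extract the hardening index from a Lynis report file.
# NOTE: the key name "hardening_index" is an assumption; check your own
# /var/log/lynis-report.dat before relying on it.
report="${1:-/var/log/lynis-report.dat}"
# fall back to a synthetic report so the sketch runs anywhere
if [ ! -r "$report" ]; then
  report=$(mktemp)
  echo "hardening_index=64" > "$report"
fi
score=$(grep '^hardening_index=' "$report" | cut -d= -f2)
echo "Hardening index: $score"
```

Dropping this into a cron job lets you graph the score over time as you work through the suggestions.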

Installing a Malware Scanner

Install ClamAV

sudo apt-get install clamav

Download virus and malware definitions (this takes about 30 min)

sudo freshclam

Output:

sudo freshclam
> ClamAV Update process started at Wed Nov 15th 20:44:55 2017
> Downloading main.cvd [10%]

On some boxes I had an issue with ClamAV reporting that freshclam could not run

sudo freshclam
ERROR: /var/log/clamav/freshclam.log is locked by another process
ERROR: Problem with internal logger (UpdateLogFile = /var/log/clamav/freshclam.log).

This was fixed by typing

rm -rf /var/log/clamav/freshclam.log
sudo freshclam
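In my experience that lock usually means the clamav-freshclam daemon (installed and started automatically with the Ubuntu package) is already updating in the background; stopping it first avoids having to delete the log. A sketch of the alternative fix, to be run on the affected server:

```shell
sudo systemctl stop clamav-freshclam    # releases the freshclam.log lock
sudo freshclam                          # run the manual definition update
sudo systemctl start clamav-freshclam   # resume automatic updates
```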

Troubleshooting ClamAV

ClamAV does not like low-RAM boxes and may produce this error

Downloading main.cvd [100%]
ERROR: Database load killed by signal 9
ERROR: Failed to load new database

It looks like the solution is to increase your total RAM.

fyi: Scan with ClamAV

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /
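Definitions alone do not scan anything, so a scheduled scan keeps ClamAV earning its keep. A hypothetical weekly entry for root's crontab (crontab -e) might look like this (path and schedule are assumptions, adjust to taste):

```shell
# every Sunday at 03:00, recursively scan /home and log infected files only
0 3 * * 0 /usr/bin/clamscan -i -r /home >> /var/log/clamscan-weekly.log 2>&1
```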

Re-running Lynis gave me the following malware status

- Malware scanner        [V]

Lynis Security rating

Hardening index : 69 [##############      ]

Installed apt-show-versions, arpwatch and arpon

sudo apt-get install apt-show-versions
sudo apt-get install arpwatch
sudo apt-get install arpon

After re-running the scan I got this Lynis security rating (an improvement of 1)

Hardening index : 70 [#############       ]

Installed and configured debsums and auditd

sudo apt-get install debsums
sudo apt-get install auditd

Now I get the following Lynis security rating score.

Hardening index : 71 [##############      ]

Conclusion

Lynis is great at performing an audit and recommending areas of work that allow you to harden your system (brute force protection, firewall, etc.).

Security Don’ts

  • Never think you are done securing a system.

Security Do’s

  • Update software (and remove software you do not use).
  • Check the Lynis suggestions and try to resolve them.
  • Security is an ongoing process: install a firewall, ban bad IPs, whitelist good IPs, and review logs.
  • Limit port access, make backups and keep on securing.

I will keep on securing and try to resolve all remaining issues.

Read my past post on Securing Ubuntu in the cloud.

Scheduling automatic system updates alone is not enough in Ubuntu (and it is not recommended anyway: the administrator should make update decisions, not a scheduled job).

apt-get update
apt-get upgrade

fyi: CISOfy/Lynis does offer paid subscriptions that include external scans of your servers: https://cisofy.com/pricing. (why upgrade?)

Lynis Plans

I will look into this feature soon.

Updating Lynis

I checked the official documentation and ran an update check

./lynis --check-update
This option is deprecated
Use: lynis update info

./lynis update info

 == Lynis ==

  Version            : 2.5.5
  Status             : Outdated
  Installed version  : 255
  Latest version     : 257
  Release date       : 2017-09-07
  Update location    : https://cisofy.com/lynis/


2007-2017, CISOfy - https://cisofy.com/lynis/

I was not sure how to update:

./lynis update
Error: Need a target for update

Examples:
lynis update check
lynis update info

./lynis update check
status=outdated

I opened an issue about updating v2.5.5 here and asked Twitter for help.

Twitter

Official Response: https://packages.cisofy.com/community/#debian-ubuntu

Git Response

Waiting..

I ended up deleting Lynis 2.5.5

ls -al
rm -R *
rm -rf *
rm -rf .git
rm -rf .gitignore
rm -rf .travis.yml
cd ..
rm -R lynis/
ls -al

Updated

./lynis update check
status=up-to-date

And reinstalled to v2.5.8

sudo git clone https://www.github.com/CISOfy/lynis

Output:

sudo git clone https://www.github.com/CISOfy/lynis
Cloning into 'lynis'...
remote: Counting objects: 8538, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 8538 (delta 0), reused 0 (delta 0), pack-reused 8534
Receiving objects: 100% (8538/8538), 3.96 MiB | 2.01 MiB/s, done.
Resolving deltas: 100% (6265/6265), done.
Checking connectivity... done.

More actions post upgrade to 2.5.8

  • Added a legal notice to the “/etc/issue” and “/etc/issue.net” files.

Installing Lynis via apt-get instead of git clone

The official steps can be located here: https://packages.cisofy.com/community/#debian-ubuntu

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C80E383C3DE9F082E01391A0366C67DE91CA5D5F
apt install apt-transport-https
echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/99disable-translations
echo "deb https://packages.cisofy.com/community/lynis/deb/ xenial main" > /etc/apt/sources.list.d/cisofy-lynis.list
apt update
apt install lynis
lynis show version

Unfortunately, I had an error with “apt update”

Error:

E: Malformed entry 1 in list file /etc/apt/sources.list.d/cisofy-lynis.list (Component)
E: The list of sources could not be read.

Complete install output

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C80E383C3DE9F082E01391A0366C67DE91CA5D5F
Executing: /tmp/tmp.Dz9g9nKV6i/gpg.1.sh --keyserver
keyserver.ubuntu.com
--recv-keys
C80E383C3DE9F082E01391A0366C67DE91CA5D5F
gpg: requesting key 91CA5D5F from hkp server keyserver.ubuntu.com
gpg: key 91CA5D5F: public key "CISOfy Software (signed software packages) <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

# apt install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
apt-transport-https is already the newest version (1.2.24).
The following packages were automatically installed and are no longer required:
  gamin libfile-copy-recursive-perl libgamin0 libglade2-0 libpango1.0-0 libpangox-1.0-0 openbsd-inetd pure-ftpd-common update-inetd
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 8 not upgraded.

# echo 'Acquire::Languages "none";' > /etc/apt/apt.conf.d/99disable-translations

# echo "deb https://packages.cisofy.com/community/lynis/deb/xenial main" > /etc/apt/sources.list.d/cisofy-lynis.list

# apt update
E: Malformed entry 1 in list file /etc/apt/sources.list.d/cisofy-lynis.list (Component)
E: The list of sources could not be read.

I reopened GitHub issue 491. A quick reply revealed that I had not put a space before “xenial” (oops).

fyi: I removed the dead key and source list from apt by typing…

apt-key list
apt-key del 91CA5D5F
rm -rf /etc/apt/sources.list.d/cisofy-lynis.list

I can now install and update other packages with apt without hitting the following error

E: Malformed entry 1 in list file /etc/apt/sources.list.d/cisofy-lynis.list (Component)
E: The list of sources could not be read.

I will remove the git clone, re-run the apt version later, and add more steps to reach a high-90s Lynis score.

More

Read the official documentation https://cisofy.com/documentation/lynis/

Next: I will investigate the enterprise version (https://cisofy.com/pricing/) soon.

Hope this helps. If I have missed something please let me know on Twitter at @FearbySoftware


Filed Under: Advice, Cloud, Computer, Firewall, OS, Security, Server, Software, ssl, Ubuntu, VM, Vultr Tagged With: Audit, Lynis, secure, security, ubuntu

How to backup an Ubuntu VM in the cloud via crontab entries that trigger Bash Scripts, SSH, rsync and email backup alerts

August 20, 2017 by Simon

Here is how I back up a number of Ubuntu servers with crontab entries, bash scripts and rsync, and send a backup status email.

Read more on useful terminal commands here. Read about setting up a Digital Ocean Ubuntu server here for as low as $5 a month ($10 free credit). Read more on setting up an AWS Ubuntu server here.

I have 6 numbered scripts in my scripts folder that handle backups; I call these scripts at set times via crontab.

fyi: Paths below have been changed for the purpose of this post (security).

1 1 * * * /bin/bash /scripts-folder/0.backupfiles.sh >> /backup-folder/0.backupfiles.log
3 1 * * * /bin/bash /scripts-folder/1.backupdbs.sh >> /backup-folder/1.backupdbs.log
5 1 * * * /bin/bash /scripts-folder/2.shrinkmysql.sh >> /backup-folder/2.shrinkmysql.log
10 1 * * * /bin/bash /scripts-folder/3.addtobackuplog.sh >> /backup-folder/3.addtobackuplog.log
11 1 * * * /bin/bash /scripts-folder/4.syncfiles.sh >> /backup-folder/4.syncfiles.log
15 1 * * * /bin/bash /scripts-folder/5.sendbackupemail.sh > /dev/null 2>&1
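Two redirection styles do the logging above: the first five jobs append their output to a log with `>>`, while the email script discards its output entirely. A minimal sketch of the difference (run anywhere; the paths are temp files, not my real layout):

```shell
#!/bin/sh
# Sketch of the two redirection styles used in the crontab entries above:
# ">> file" appends a job's stdout to a log, while "> /dev/null 2>&1"
# discards stdout and stderr (no log, and no cron email about output).
log=$(mktemp)
echo "backup ok" >> "$log"            # appended to the log (scripts 0-4)
ls /no/such/path > /dev/null 2>&1     # silenced entirely (script 5)
cat "$log"
```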

https://crontab.guru/ is great for specifying times to run jobs on each server (I back up one server at 1 AM, another at 2 AM, etc.; never at the same time).

Bring up your crontab list

crontab -e

Check out the Crontab schedule generator here.

Below is the contents of my /scripts-folder/0.backupfiles.sh (sensitive information removed).

I use this script to backup folders and configuration data

cat /scripts-folder/0.backupfiles.sh
#!/bin/bash

echo "Deleting old NGINX config..";
rm /backup-folder/config-nginx.zip

echo "Backing Up NGNIX..";
zip -r -9 /backup-folder/config-nginx.zip /etc/nginx/ -x "*.tmp" -x "*.temp" -x "./backup-folder/*.bak" -x "./backup-folder/*.zip"

echo "Deleting old www backup(s) ..";
#rm /backup-folder/www.zip
echo "Removing old www backup folder";
rm -R /backup-folder/www
echo "Making new backup folder at /backup-folder/www/";
mkdir /backup-folder/www

echo "Copying /www/ to /backup-folder/www/";
cp -rTv /www/ /backup-folder/www/
echo "Done copying /www/ to /backup-folder/www/";

Below is the contents of my /scripts-folder/1.backupdbs.sh (sensitive information removed).

I use this script to dump my MySQL database.

cat /scripts-folder/1.backupdbs.sh
#!/bin/bash

echo "$(date) 1.backupdbs.sh ...." >> /backup-folder/backup.log

echo "Removing old SQL backup..";
rm /backup-folder/mysql/database-dump.sql

echo "Backing up SQL";
/usr/bin/mysqldump --all-databases -u 'mysqluser' -p'YourPasswordHere' > /backup-folder/mysql/database-dump.sql

echo "Done backing up the database";
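A side note on the -p'...' flag: a password on the command line is visible to any local user via ps while the dump runs. MySQL clients can read credentials from an option file instead; a sketch (the user and password values are placeholders, not my real credentials):

```shell
#!/bin/sh
# Sketch: keep MySQL credentials out of the process list by using an
# option file instead of -u/-p flags. Values below are placeholders.
cnf="$HOME/.my.cnf"
cat > "$cnf" <<'EOF'
[mysqldump]
user=mysqluser
password=YourPasswordHere
EOF
chmod 600 "$cnf"   # readable only by the owner
# mysqldump now needs no credentials on the command line:
#   /usr/bin/mysqldump --all-databases > /backup-folder/mysql/database-dump.sql
echo "wrote $cnf"
```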

Below is the contents of my /scripts-folder/2.shrinkmysql.sh (sensitive information removed).

I use this script to tar my SQL dumps as these files can be quite big

cat /scripts-folder/2.shrinkmysql.sh
#!/bin/bash

echo "$(date) 2.shrinkmysql.sh ...." >> /backup-folder/backup.log

echo "Backing up MySQL dump..";
tar -zcf /backup-folder/mysql.tgz /backup-folder/mysql/

echo "Removing old MySQL dump..";
rm /backup-folder/mysql/*.sql
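Since the script deletes the .sql files right after archiving them, it is worth verifying the tarball first so a corrupt archive never becomes the only copy. A sketch of that check, demonstrated in a temp dir rather than /backup-folder:

```shell
#!/bin/sh
# Sketch: verify a tar.gz archive before deleting the SQL dump it contains.
dir=$(mktemp -d)
echo "-- dump --" > "$dir/database-dump.sql"
tar -zcf "$dir/mysql.tgz" -C "$dir" database-dump.sql
# "tar -ztf" lists the archive; a non-zero exit status means it is unreadable
if tar -ztf "$dir/mysql.tgz" > /dev/null 2>&1; then
  rm "$dir/database-dump.sql"
  echo "archive verified, dump removed"
fi
```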

Below is the contents of my /scripts-folder/3.addtobackuplog.sh (sensitive information removed).

This script is handy for dumping extra information.

cat /scripts-folder/3.addtobackuplog.sh
#!/bin/bash

echo "$(date) 3.addtobackuplog.sh ...." >> /backup-folder/backup.log

echo "Server Name.." >> /backup-folder/backup.log
grep "server_name" /etc/nginx/sites-available/default >> /backup-folder/backup.log

echo "$(date) Time" >> /backup-folder/backup.log
sudo hwclock --show  >> /backup-folder/backup.log

echo "$(date) Uptime, Load etc" >> /backup-folder/backup.log
w -i >> /backup-folder/backup.log

echo "$(date) Memory" >> /backup-folder/backup.log
free  >> /backup-folder/backup.log

echo "$(date) Disk Space" >> /backup-folder/backup.log
pydf >> /backup-folder/backup.log

echo "Firewall" >> /backup-folder/backup.log
ufw status >> /backup-folder/backup.log

echo "Adding to Backup Log file..";
echo "$(date) Nightly MySQL Backup Successful....." >> /backup-folder/backup.log

Below is the contents of my /scripts-folder/4.syncfiles.sh (sensitive information removed).

This script is the workhorse routine that rsyncs files from the source server to the backup server (a dedicated Vultr server with an A name record attaching it to my domain).

I installed sshpass to pass in the SSH user's password (I tried to set up an rsync daemon but had no luck, so rsync runs over SSH instead, with authorized_keys set).  I ensured the appropriate ports were open on the source server (OUT 22, 873) and the backup server (IN 22, 873).

cat /scripts-folder/4.syncfiles.sh
#!/bin/bash

echo "$(date) 4.syncfiles.sh ...." >> /backup-folder/backup.log
echo "Syncing Files.";

sudo sshpass -p 'Y0urW0rkingSSHR00tPa$0ord' rsync -a -e  'ssh -p 22 ' --progress -P /backup-folder backup-server.yourdomain.com:/backup-folder/1.www.server01.com/
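For reference, key-based SSH authentication removes the need for sshpass (and the password in the script) entirely. A sketch, assuming the same backup server hostname and a root login (adjust the user to suit):

```shell
# One-off setup on the source server: generate a key pair with no
# passphrase (so cron can use it) and copy the public key across.
ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@backup-server.yourdomain.com

# The nightly rsync then runs unattended with no password stored:
rsync -a -e 'ssh -p 22' --progress -P /backup-folder backup-server.yourdomain.com:/backup-folder/1.www.server01.com/
```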

The ufw firewall has good support for only allowing certain IPs to talk on certain ports.

Set inbound firewall rules (from certain IPs):

sudo ufw allow from 123.123.123.123 to any port 22

Change 123.123.123.123 to your sending server's IP.

Set outbound firewall rules (to certain IPs):

sudo ufw allow out from 123.123.123.123 to any port 22

Change 123.123.123.123 to your backup server's IP.

You can and should set up rate limits on IPs hitting certain ports.

sudo ufw limit 22 comment 'Rate limit for this port has been reached'

Install Fail2Ban to automatically ban malicious users. Fail2Ban reads log files that contain password failure reports
and bans the corresponding IP addresses using firewall rules.  Read more on securing Ubuntu in the cloud here.

Below is the contents of my /scripts-folder/5.sendbackupemail.sh (sensitive information removed).

This script sends an email and attaches a zip file of all log files generated through the backup process.

cat /scripts-folder/5.sendbackupemail.sh
#!/bin/bash

echo "$(date) 5.sendbackupemail.sh ...." >> /backup-folder/backup.log

echo "Zipping up log Files.";

zip -r -9 /backup-folder/backup-log.zip /backup-folder/*.log

echo "Sending Email";
sendemail -f [email protected] -t [email protected] -u "Backup Alert" -m "server01 has been backed up" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp Y0urGSu1tePasswordG0e$Here123 -a /backup-folder/backup-log.zip

Read my guide on setting up sendemail here.

Security Considerations

You should never store passwords in scripts that open SSH connections, create MySQL dumps or talk to email servers; I will update this guide when I have solved all of these cases.  Also, give user accounts the least access required where possible.
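For the MySQL dump case, one common fix is a root-only option file, so the password never appears in the script or on the command line (where `ps` can show it). A sketch; the user and password values are placeholders:

```shell
# Store the dump credentials in an option file only root can read.
cat > /root/.my.cnf <<'EOF'
[mysqldump]
user=mysqluser
password=YourPasswordHere
EOF
chmod 600 /root/.my.cnf

# mysqldump now picks the credentials up automatically:
/usr/bin/mysqldump --all-databases > /backup-folder/mysql/database-dump.sql
```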

Target Server Configuration

Also, you can see in /scripts-folder/4.syncfiles.sh that I am saving to the ‘/backup-folder/1.www.server01.com/’ folder; you can make as many folders as you want to make the most of the backup server.  I would advise you not to use this server for anything else (like web servers and apps) as it is holding important data.

backup-server.yourdomain.com:/backup-folder/1.www.server01.com/

I have a handy script to delete all backups (handy during testing).

#!/bin/bash

echo "Deleting Backup Folders..........................................";

echo " Deleting /backup-folder/1.www.server01.com";
rm -R /backup-folder/1.www.server01.com

echo " Deleting /backup-folder/2.www.server02.com";
rm -R /backup-folder/2.www.server02.com

echo " Deleting /backup-folder/3.www.server03.com";
rm -R /backup-folder/3.www.server03.com

echo " Deleting /backup-folder/4.www.server04.com";
rm -R /backup-folder/4.www.server04.com

echo " Deleting /backup-folder/5.www.server05.com";
rm -R /backup-folder/5.www.server05.com

echo " Deleting /backup-folder/6.www.server06.com";
rm -R /backup-folder/6.www.server06.com

echo " Deleting /backup-folder/7.www.server07.com";
rm -R /backup-folder/7.www.server07.com

echo " Deleting /backup-folder/8.www.server08.com";
rm -R /backup-folder/8.www.server08.com

echo "
";

echo "Creating Backup Folders.........................................";

echo " Making folder /backup-folder/1.www.server01.com";
mkdir /backup-folder/1.www.server01.com

echo " Making folder /backup-folder/2.www.server02.com";
mkdir /backup-folder/2.www.server02.com

echo " Making folder /backup-folder/3.www.server03.com";
mkdir /backup-folder/3.www.server03.com

echo " Making folder /backup-folder/4.www.server04.com";
mkdir /backup-folder/4.www.server04.com

echo " Making folder /backup-folder/5.www.server05.com";
mkdir /backup-folder/5.www.server05.com

echo " Making folder /backup-folder/6.www.server06.com";
mkdir /backup-folder/6.www.server06.com

echo " Making folder /backup-folder/7.www.server07.com";
mkdir /backup-folder/7.www.server07.com

echo " Making folder /backup-folder/8.www.server08.com";
mkdir /backup-folder/8.www.server08.com

echo "
";

echo "Backup Folder Contents.........................................";
ls /backup-folder -al
echo "
";

echo "Folder Structure...............................................";
cd /backup-folder
pwd
tree -a -f -p -h  -l -R

echo "
";

echo "How big is the backup folder...................................";
du -hs /backup-folder

echo "
";

echo "Done...........................................................";
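The repeated delete/recreate blocks above can be collapsed into a single loop (a sketch, assuming the same numbered folder layout):

```shell
#!/bin/bash
# Reset the eight per-server backup folders in one pass.
for i in 1 2 3 4 5 6 7 8; do
    dir="/backup-folder/$(printf '%d.www.server%02d.com' "$i" "$i")"
    echo " Recreating $dir"
    rm -R "$dir"
    mkdir "$dir"
done
```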

Ensure your backup server is just for backups and only allows traffic from known IPs.

ufw status
Status: active

To                         Action      From
--                         ------      ----
22                         ALLOW       123.123.123.123
22                         ALLOW       123.123.123.124
22                         ALLOW       123.123.123.125
22                         ALLOW       123.123.123.126
22                         ALLOW       123.123.123.127
22                         ALLOW       123.123.123.128
22                         ALLOW       123.123.123.129
22                         ALLOW       123.123.123.130
53                         ALLOW       Anywhere

22                         ALLOW OUT   123.123.123.123
22                         ALLOW OUT   123.123.123.124
22                         ALLOW OUT   123.123.123.125
22                         ALLOW OUT   123.123.123.126
22                         ALLOW OUT   123.123.123.127
22                         ALLOW OUT   123.123.123.128
22                         ALLOW OUT   123.123.123.129
22                         ALLOW OUT   123.123.123.130

Change the 123.x.x.x entries to your servers' IPs.

Tip: Keep an eye on the backups with tools like ncdu

sudo ncdu /backup-folder
ncdu 1.11 ~ Use the arrow keys to navigate, press ? for help
--- /backup ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    1.0 GiB [##########] /6.www.server01.com
  462.1 MiB [####      ] /1.www.server02.com
  450.1 MiB [####      ] /5.www.server03.com
   60.1 MiB [          ] /2.www.server04.com
  276.0 KiB [          ] /3.www.server05.com
  276.0 KiB [          ] /4.www.server06.com
e   4.0 KiB [          ] /8.www.server07.com
e   4.0 KiB [          ] /7.www.server08.com
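If ncdu is not installed, plain du gives a similar per-server view, largest folder first:

```shell
# Summarise each backup folder, human-readable, sorted by size.
sudo du -sh /backup-folder/* | sort -rh
```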

Installing sshpass on OSX

If you want to back up to this server from OSX you will need to install sshpass

curl -O -L http://downloads.sourceforge.net/project/sshpass/sshpass/1.06/sshpass-1.06.tar.gz && tar xvzf sshpass-1.06.tar.gz
cd sshpass-1.06
./configure
sudo make install

sshpass should be installed

sshpass -V
sshpass 1.06
(C) 2006-2011 Lingnu Open Source Consulting Ltd.
(C) 2015-2016 Shachar Shemesh
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.

Using "assword" as the default password prompt indicator.

I have not got sshpass working yet (error: “Host key verification failed.”).  I had to remove the bad known host entry from ~/.ssh/known_hosts on OSX.
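Rather than hand-editing ~/.ssh/known_hosts, ssh-keygen can remove the stale entry for you (the hostname below is a placeholder):

```shell
# Delete the cached host key for the backup server; the next connection
# will prompt you to accept the server's new key.
ssh-keygen -R backup-server.yourdomain.com
```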

But this worked on OSX

rsync -a -e 'ssh -p 22 ' --progress -P ~/Desktop [email protected]:/backup/8.Mac/

Note: Enter the server's username (user@) before the hostname or rsync will use the logged-in OSX username

Don’t forget to check the backup server's disk usage often.

disk usage screenshot

Output from backing up an incremental update (1x new folder)

localhost:~ local-account$ rsync -a -e  'ssh -p 22 ' --progress -P /Users/local-account/folder-to-backup [email protected]:/backup/the-computer/
[email protected]'s password: 
building file list ... 
51354 files to consider
folder-to-backup/
folder-to-backup/TestProject/
folder-to-backup/TestProject/.git/
folder-to-backup/TestProject/.git/COMMIT_EDITMSG
          15 100%    0.00kB/s    0:00:00 (xfer#1, to-check=16600/51354)
folder-to-backup/TestProject/.git/HEAD
          23 100%   22.46kB/s    0:00:00 (xfer#2, to-check=16599/51354)
folder-to-backup/TestProject/.git/config
         137 100%  133.79kB/s    0:00:00 (xfer#3, to-check=16598/51354)
folder-to-backup/TestProject/.git/description
          73 100%   10.18kB/s    0:00:00 (xfer#4, to-check=16597/51354)
folder-to-backup/TestProject/.git/index
        1581 100%  220.56kB/s    0:00:00 (xfer#5, to-check=16596/51354)
folder-to-backup/TestProject/.git/hooks/
folder-to-backup/TestProject/.git/hooks/README.sample
         177 100%   21.61kB/s    0:00:00 (xfer#6, to-check=16594/51354)
folder-to-backup/TestProject/.git/info/
folder-to-backup/TestProject/.git/info/exclude
          40 100%    4.88kB/s    0:00:00 (xfer#7, to-check=16592/51354)
folder-to-backup/TestProject/.git/logs/
folder-to-backup/TestProject/.git/logs/HEAD
         164 100%   20.02kB/s    0:00:00 (xfer#8, to-check=16590/51354)
folder-to-backup/TestProject/.git/logs/refs/
folder-to-backup/TestProject/.git/logs/refs/heads/
folder-to-backup/TestProject/.git/logs/refs/heads/master
         164 100%   20.02kB/s    0:00:00 (xfer#9, to-check=16587/51354)
folder-to-backup/TestProject/.git/objects/
folder-to-backup/TestProject/.git/objects/05/
folder-to-backup/TestProject/.git/objects/05/0853a802dd40cad0e15afa19516e9ad94f5801
        2714 100%  294.49kB/s    0:00:00 (xfer#10, to-check=16584/51354)
folder-to-backup/TestProject/.git/objects/11/
folder-to-backup/TestProject/.git/objects/11/729e81fc116908809fc17d60c8604aa43ec095
         105 100%   11.39kB/s    0:00:00 (xfer#11, to-check=16582/51354)
folder-to-backup/TestProject/.git/objects/23/
folder-to-backup/TestProject/.git/objects/23/768a20baaf8aa0c31b0e485612a5e245bb570d
         131 100%   12.79kB/s    0:00:00 (xfer#12, to-check=16580/51354)
folder-to-backup/TestProject/.git/objects/27/
folder-to-backup/TestProject/.git/objects/27/3375fc70381bd2608e05c03e00ee09c42bdc58
         783 100%   76.46kB/s    0:00:00 (xfer#13, to-check=16578/51354)
folder-to-backup/TestProject/.git/objects/2a/
folder-to-backup/TestProject/.git/objects/2a/507ef5ea3b1d68c2d92bb4aece950ef601543e
         303 100%   26.90kB/s    0:00:00 (xfer#14, to-check=16576/51354)
folder-to-backup/TestProject/.git/objects/2b/
folder-to-backup/TestProject/.git/objects/2b/f8bd93d56787a7548c7f8960a94f05c269b486
         136 100%   12.07kB/s    0:00:00 (xfer#15, to-check=16574/51354)
folder-to-backup/TestProject/.git/objects/2f/
folder-to-backup/TestProject/.git/objects/2f/900764e9d12d8da7e5e01ba34d2b7b2d95ffd4
         209 100%   17.01kB/s    0:00:00 (xfer#16, to-check=16572/51354)
folder-to-backup/TestProject/.git/objects/36/
folder-to-backup/TestProject/.git/objects/36/d2c80d8893178d7e1f2964085b273959bfdc28
         201 100%   16.36kB/s    0:00:00 (xfer#17, to-check=16570/51354)
folder-to-backup/TestProject/.git/objects/3d/
folder-to-backup/TestProject/.git/objects/3d/e5a02083dbe9c23731a38901dca9e913c04dd0
         130 100%   10.58kB/s    0:00:00 (xfer#18, to-check=16568/51354)
folder-to-backup/TestProject/.git/objects/40/
folder-to-backup/TestProject/.git/objects/40/40592d8d4d886a5c81e1369ddcde71dd3b66b5
         841 100%   63.18kB/s    0:00:00 (xfer#19, to-check=16566/51354)
folder-to-backup/TestProject/.git/objects/87/
folder-to-backup/TestProject/.git/objects/87/60f48ddbc9ed0863e3fdcfce5e4536d08f9b8d
          86 100%    6.46kB/s    0:00:00 (xfer#20, to-check=16564/51354)
folder-to-backup/TestProject/.git/objects/a9/
folder-to-backup/TestProject/.git/objects/a9/e6a23fa34a5de4cd36250dc0d797439d85f2ea
         306 100%   22.99kB/s    0:00:00 (xfer#21, to-check=16562/51354)
folder-to-backup/TestProject/.git/objects/b0/
folder-to-backup/TestProject/.git/objects/b0/4364089fdc64fe3b81bcd41462dd55edb7a001
          57 100%    4.28kB/s    0:00:00 (xfer#22, to-check=16560/51354)
folder-to-backup/TestProject/.git/objects/be/
folder-to-backup/TestProject/.git/objects/be/3b93d6d8896d69670f1a8e26d1f51f9743d07e
          60 100%    4.19kB/s    0:00:00 (xfer#23, to-check=16558/51354)
folder-to-backup/TestProject/.git/objects/d0/
folder-to-backup/TestProject/.git/objects/d0/524738680109d9f0ca001dad7c9bbf563e898e
         523 100%   36.48kB/s    0:00:00 (xfer#24, to-check=16556/51354)
folder-to-backup/TestProject/.git/objects/d5/
folder-to-backup/TestProject/.git/objects/d5/4e024fe16b73e5602934ef83e0b32a16243a5e
          69 100%    4.49kB/s    0:00:00 (xfer#25, to-check=16554/51354)
folder-to-backup/TestProject/.git/objects/db/
folder-to-backup/TestProject/.git/objects/db/3f0ce163c8033a175d27de6a4e96aadc115625
          59 100%    3.84kB/s    0:00:00 (xfer#26, to-check=16552/51354)
folder-to-backup/TestProject/.git/objects/df/
folder-to-backup/TestProject/.git/objects/df/cad4828b338206f0a7f18732c086c4ef959a7b
          51 100%    3.32kB/s    0:00:00 (xfer#27, to-check=16550/51354)
folder-to-backup/TestProject/.git/objects/ef/
folder-to-backup/TestProject/.git/objects/ef/e6d036f817624654f77c4a91ae6f20b5ecbe9d
          94 100%    5.74kB/s    0:00:00 (xfer#28, to-check=16548/51354)
folder-to-backup/TestProject/.git/objects/f2/
folder-to-backup/TestProject/.git/objects/f2/b43571ec42bad7ac43f19cf851045b04b6eb29
         936 100%   57.13kB/s    0:00:00 (xfer#29, to-check=16546/51354)
folder-to-backup/TestProject/.git/objects/fd/
folder-to-backup/TestProject/.git/objects/fd/f3f97d1b6e9d8d29bb69a88c4d89ca752bd937
         807 100%   49.26kB/s    0:00:00 (xfer#30, to-check=16544/51354)
folder-to-backup/TestProject/.git/objects/info/
folder-to-backup/TestProject/.git/objects/pack/
folder-to-backup/TestProject/.git/refs/
folder-to-backup/TestProject/.git/refs/heads/
folder-to-backup/TestProject/.git/refs/heads/master
          41 100%    2.50kB/s    0:00:00 (xfer#31, to-check=16539/51354)
folder-to-backup/TestProject/.git/refs/tags/
folder-to-backup/TestProject/TestProject.xcodeproj/
folder-to-backup/TestProject/TestProject.xcodeproj/project.pbxproj
       11476 100%  659.24kB/s    0:00:00 (xfer#32, to-check=16536/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/contents.xcworkspacedata
         156 100%    8.96kB/s    0:00:00 (xfer#33, to-check=16534/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/xcuserdata/
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/xcuserdata/simon.xcuserdatad/
folder-to-backup/TestProject/TestProject.xcodeproj/project.xcworkspace/xcuserdata/simon.xcuserdatad/UserInterfaceState.xcuserstate
        8190 100%  470.47kB/s    0:00:00 (xfer#34, to-check=16531/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/xcschemes/
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/xcschemes/TestProject.xcscheme
        3351 100%  192.50kB/s    0:00:00 (xfer#35, to-check=16527/51354)
folder-to-backup/TestProject/TestProject.xcodeproj/xcuserdata/simon.xcuserdatad/xcschemes/xcschememanagement.plist
         483 100%   27.75kB/s    0:00:00 (xfer#36, to-check=16526/51354)
folder-to-backup/TestProject/TestProject/
folder-to-backup/TestProject/TestProject/AppDelegate.swift
        2172 100%  117.84kB/s    0:00:00 (xfer#37, to-check=16524/51354)
folder-to-backup/TestProject/TestProject/Info.plist
        1442 100%   78.23kB/s    0:00:00 (xfer#38, to-check=16523/51354)
folder-to-backup/TestProject/TestProject/ViewController.swift
         505 100%   27.40kB/s    0:00:00 (xfer#39, to-check=16522/51354)
folder-to-backup/TestProject/TestProject/Assets.xcassets/
folder-to-backup/TestProject/TestProject/Assets.xcassets/AppIcon.appiconset/
folder-to-backup/TestProject/TestProject/Assets.xcassets/AppIcon.appiconset/Contents.json
        1077 100%   58.43kB/s    0:00:00 (xfer#40, to-check=16519/51354)
folder-to-backup/TestProject/TestProject/Base.lproj/
folder-to-backup/TestProject/TestProject/Base.lproj/LaunchScreen.storyboard
        1740 100%   94.40kB/s    0:00:00 (xfer#41, to-check=16517/51354)
folder-to-backup/TestProject/TestProject/Base.lproj/Main.storyboard
        1695 100%   91.96kB/s    0:00:00 (xfer#42, to-check=16516/51354)

sent 1243970 bytes  received 1220 bytes  75466.06 bytes/sec
total size is 10693902652  speedup is 8588.17

Update with no files to upload

localhost:~ local-account$ rsync -a -e  'ssh -p 22 ' --progress -P /Users/local-account/folder-to-backup [email protected]:/backup/the-computer/
[email protected]'s password: 
building file list ... 
51354 files to consider

sent 1198459 bytes  received 20 bytes  82653.72 bytes/sec
total size is 10693902652  speedup is 8922.90

Backing up is easy:

rsync -a -e  'ssh -p 22 ' --progress -P /Users/local-account/folder-to-backup [email protected]:/backup/the-computer/

If you want incremental and full backups try Duplicity.

Hope this helps.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.7 Duplicity

Filed Under: Advice, AWS, Backup, Cloud, Development, Digital Ocean, Domain, Firewall, MySQL, Networking, Security, Share, Transfer, Ubuntu, VM, Vultr Tagged With: Backup, bash script, rsync, send email, server

How to send email via G Suite from Ubuntu in the cloud

August 20, 2017 by Simon

Here is how I send emails from the command line on Ubuntu servers in the cloud via a G Suite connected email account.

Jan 2018 Update

Post on adding a second domain to G Suite

Post on adding email aliases to G Suite

Main

If you use ufw for your Ubuntu firewall then allow port 587 out traffic (read more in securing Ubuntu in the cloud here).

sudo ufw allow out 587

Ensure your port is open on IPV4 and IPV6.

sudo ufw status

If you have a GUI managed firewall with your server host then configure it to allow port 587 (out).

Read more on useful terminal commands here or setting up a Digital Ocean Ubuntu server here for as low as $5 a month here, Vultr server for as low as $2.5 here ($10 free credit). Read more about setting up an AWS Ubuntu server here.

Install sendemail and other prerequisites

sudo apt-get install libio-socket-ssl-perl libnet-ssleay-perl sendemail

Now you are ready to send an email with one line in the terminal. Use your Gmail address, unless you have diverted your domain's email to G Suite (in which case use that address, guide here). Create a G Suite account here.

FYI: I have setup my email for my domain to redirect via G Suite (see my guide here and older guide here)

Send an email from the command line

sendemail -f [email protected] -t [email protected] -u "test email" -m "test message" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp [email protected][email protected]

This is not a drop-in replacement for the Outlook or Thunderbird email clients, but it is perfect for command-line alerts from cron jobs or start-up notifications.
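Building on that, a small wrapper lets a cron job email you only when it fails; the script path, addresses and password below are placeholders in the same redacted style as above:

```shell
#!/bin/bash
# Run the job, capture its output, and send an alert only on failure.
if ! /bin/bash /scripts-folder/0.backupfiles.sh > /tmp/job.log 2>&1; then
    sendemail -f [email protected] -t [email protected] \
        -u "Cron job FAILED on $(hostname)" \
        -m "$(tail -n 20 /tmp/job.log)" \
        -s smtp.gmail.com:587 -o tls=yes \
        -xu [email protected] -xp 'YourPasswordHere'
fi
```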

Sending an email with an attachment

sendemail -f [email protected] -t [email protected] -u "test email" -m "test message" -s smtp.gmail.com:587 -o tls=yes -xu [email protected] -xp [email protected]&[email protected] -a /folder/file.zip

Coming soon: A guide on backing up Ubuntu with Rsync etc.

More

Post on adding a second domain to G Suite

Post on adding email aliases to G Suite


v1.2 added other links

v1.1 added links to two G Suite guides.

Filed Under: AWS, Cloud, Digital Ocean, Server, Terminal, Ubuntu, VM, Vultr Tagged With: command line, email, senemail, sent

Deploying WordPress to a Vultr VM via command line

August 20, 2017 by Simon

Here is my guide on setting up WordPress on an Ubuntu server via the command line. Here is my recent guide on the wp-cli tool.

Read my guide on setting up a Vultr VM and installing NGINX web server and MySQL database. Use this link to create a Vultr account.  This guide assumes you have a working Ubuntu VM with NGINX web server, MySQL, and SSL.

Consider setting up an SSL certificate (read my guide here on setting up a free SSL certificate with Let’s Encrypt). Once again, read my guides on setting up a Vultr server, moving WordPress from CPanel to a self-managed server, and securing Ubuntu in the cloud. Ensure you are backing up your server (read my guide on How to backup an Ubuntu VM in the cloud via crontab entries that trigger Bash Scripts, SSH, Rsync and email backup alerts).

Ensure MySQL is setup.

mysql --version
mysql  Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using  EditLine wrapper

Ensure your server is set up, the firewall enabled (ports 80 and 443 allowed), and NGINX installed and working.

Check NGINX version

sudo nginx -v
nginx version: nginx/1.13.3

Check NGINX Status

service nginx status
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-08-18 00:35:13 AEST; 3 days ago
 Main PID: 1276 (nginx)
    Tasks: 3
   Memory: 6.4M
      CPU: 3.218s
   CGroup: /system.slice/nginx.service
           ├─1276 nginx: master process /usr/sbin/nginx -g daemon on; master_process on
           ├─1277 nginx: worker process
           └─1278 nginx: cache manager process

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Check your PHP install status to confirm your setup: put this in a new PHP file (e.g. /p.php) and load it in a browser to view the PHP configuration and verify your PHP setup.

<?php
phpinfo();
?>

Loading PHP Configuration

First I edited the NGINX configuration to allow WordPress to work.

location / {
        try_files $uri $uri/ /index.php?q=$uri&$args;
        index index.php index.html index.htm;
        proxy_set_header Proxy "";
}

Mostly I added this line.

try_files $uri $uri/ /index.php?q=$uri&$args;

I restarted NGINX and PHP

nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart

If this config change is not made, WordPress will not install or run.

Database

In order to set up WordPress, we need to create a MySQL database and a database user before downloading WordPress from the command line.

From an SSH terminal, type the following and log in with your MySQL root password:

mysql -p
password:

Create a database (choose a database name, add random text).

mysql> create database databasemena123;
Query OK, 1 row affected (0.00 sec)

Create a user and assign them to the blog (choose a username, add random text)

grant all privileges on databasemena123.* to 'blogusername123'@'localhost' identified by "simple-password";
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

If your password is simple you will get this warning.

ERROR 1819 (HY000): Your password does not satisfy the current policy requirements

A 50+ character password with a good mix of letters, digits and symbols should be OK.

mysql> grant all privileges on databasemena123.* to 'blogusername123'@'localhost' identified by "xxxxxxxxxxxxxxxxxxxxxxxxremovedxxxxxxxxxxxxxxxxxxxxxxxxxxxxx";
Query OK, 0 rows affected, 1 warning (0.00 sec)

You can now apply the permissions and clear the permissions cache.

mysql> flush privileges;
Query OK, 0 rows affected (0.01 sec)
exit;
Bye

Go to your /www folder on your server and run this command to download WordPress.

sudo curl -o wordpress.zip https://wordpress.org/latest.zip

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 8701k  100 8701k    0     0  6710k      0  0:00:01  0:00:01 --:--:-- 6709k

You can now move aside any existing temporary index files in your /www folder

mv index.html oldindex.html
mv index.php oldindex.php
mv p.php oldp.php

ls -al
total 8724
drwxr-xr-x  2 root root    4096 Aug 21 11:17 .
drwxr-xr-x 27 root root    4096 Aug 13 22:27 ..
-rw-r--r--  1 root root      37 Jul 31 11:51 oldindex.html
-rw-r--r--  1 root root      37 Jul 31 11:51 oldindex.php
-rw-r--r--  1 root root      19 Aug 21 11:04 oldp.php
-rw-r--r--  1 root root 8910664 Aug 21 11:16 wordpress.zip

Now I can extract wordpress.zip

First, you need to install unzip

sudo apt-get install unzip

Now Unzip wordpress.zip

unzip wordpress.zip

At this point, I decided to remove all old index files on my website

rm -R /www/old*.*

The unzipped contents are in a subfolder called “wordpress”; we need to move the WordPress contents up a folder.

ls /www/ -al
total 8716
drwxr-xr-x  3 root root    4096 Aug 21 13:22 .
drwxr-xr-x 27 root root    4096 Aug 13 22:27 ..
drwxr-xr-x  5 root root    4096 Aug  2 21:02 wordpress
-rw-r--r--  1 root root 8911367 Aug 21 11:22 wordpress.zip

“wordpress” folder contents.

ls /www/wordpress -al
total 196
drwxr-xr-x  5 root root  4096 Aug  2 21:02 .
drwxr-xr-x  3 root root  4096 Aug 21 13:22 ..
-rw-r--r--  1 root root   418 Sep 25  2013 index.php
-rw-r--r--  1 root root 19935 Jan  2  2017 license.txt
-rw-r--r--  1 root root  7413 Dec 12  2016 readme.html
-rw-r--r--  1 root root  5447 Sep 27  2016 wp-activate.php
drwxr-xr-x  9 root root  4096 Aug  2 21:02 wp-admin
-rw-r--r--  1 root root   364 Dec 19  2015 wp-blog-header.php
-rw-r--r--  1 root root  1627 Aug 29  2016 wp-comments-post.php
-rw-r--r--  1 root root  2853 Dec 16  2015 wp-config-sample.php
drwxr-xr-x  4 root root  4096 Aug  2 21:02 wp-content
-rw-r--r--  1 root root  3286 May 24  2015 wp-cron.php
drwxr-xr-x 18 root root 12288 Aug  2 21:02 wp-includes
-rw-r--r--  1 root root  2422 Nov 21  2016 wp-links-opml.php
-rw-r--r--  1 root root  3301 Oct 25  2016 wp-load.php
-rw-r--r--  1 root root 34327 May 12 17:12 wp-login.php
-rw-r--r--  1 root root  8048 Jan 11  2017 wp-mail.php
-rw-r--r--  1 root root 16200 Apr  6 18:01 wp-settings.php
-rw-r--r--  1 root root 29924 Jan 24  2017 wp-signup.php
-rw-r--r--  1 root root  4513 Oct 14  2016 wp-trackback.php
-rw-r--r--  1 root root  3065 Aug 31  2016 xmlrpc.php

Remove the wordpress.zip in /www/

rm -R /www/wordpress.zip

Move all files from the /www/wordpress/ up a folder to /www/.

sudo mv /www/wordpress/* /www/

Now we can create an uploads folder.

mkdir /www/wp-content/uploads/

Apply permissions (or you will never be able to upload to WordPress).

chmod 755 /www/wp-content/uploads/

I think I need to apply permissions here too (to allow plugins to upload/update)

chmod 755 /www/wp-content/
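A more thorough scheme, sketching a common WordPress convention (not part of the original setup): directories get 755 and files 644, since only directories need the execute bit to be traversed:

```shell
# Apply 755 to every directory and 644 to every file under /www.
sudo find /www -type d -exec chmod 755 {} \;
sudo find /www -type f -exec chmod 644 {} \;
```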

Edit the wp-config-sample.php

sudo nano /www/wp-config-sample.php

Add your database name to the WordPress config.

Before:

define('DB_NAME', 'database_name_here');

After:

define('DB_NAME', 'databasemena123');

Add your database username and password to the WordPress config.

Before:

/** MySQL database username */
define('DB_USER', 'username_here');

/** MySQL database password */
define('DB_PASSWORD', 'password_here');

After:

/** MySQL database username */
define('DB_USER', 'blogusername123');

/** MySQL database password */
define('DB_PASSWORD', 'xxxxxxxxxxxxxxxxxxxxxxxxremovedxxxxxxxxxxxxxxxxxxxxxxxxxxxxx');

Go to https://api.wordpress.org/secret-key/1.1/salt/, copy the generated salts to your clipboard and replace these lines in your wp-config-sample.php:

define('AUTH_KEY',         'put your unique phrase here');
define('SECURE_AUTH_KEY',  'put your unique phrase here');
define('LOGGED_IN_KEY',    'put your unique phrase here');
define('NONCE_KEY',        'put your unique phrase here');
define('AUTH_SALT',        'put your unique phrase here');
define('SECURE_AUTH_SALT', 'put your unique phrase here');
define('LOGGED_IN_SALT',   'put your unique phrase here');
define('NONCE_SALT',       'put your unique phrase here');

..pasting over them with whatever you generated, e.g.

define('AUTH_KEY',         '/[email protected];#Tr#6Tz6z^[LUdOvpNREUYT[|SmAN%%V% cyWk]-I%}E+t$#4c5n6vvp');
define('SECURE_AUTH_KEY',  'q_z-F-V#[[Lf<%_4,w#L_nyG|[email protected], YK0GR)R<Lk!.zqH< [email protected],vXmMzG');
define('LOGGED_IN_KEY',    'o}c^Vb$ fyh,J6v9PyF)mdt4(Q_J}`FNOJ9.ag^i+UAUS?lmzwGzp<tV7W(wbb#:');
define('NONCE_KEY',        '<y3&QvdAz;48ZFJBAdsRmC~ejXWiOw{dTWF_)p?^E%D&GdtK2LHGZ|.^rvRF-l$m');
define('AUTH_SALT',        ',e{|+H`i6}[email protected]`kvkF??^?IC&?6W~9SHkqSxvX~z,fR Xn:[email protected]_X^');
define('SECURE_AUTH_SALT', '|g2(y}8olAv_b]>|^jR|-.VU_E[P~PoWprwTKu-mM9-:NEc#2HikST~84ad-Ksyx');
define('LOGGED_IN_SALT',   'sd1:-|ai{<Ferj,|$2+ <ietEFT9 xEe89$[8%{[email protected]{FC(?[pF$oJ[[email protected]]');
define('NONCE_SALT',       '0D]kv-x.?_o^pwKtZI:g}~64vDb.Gdy1cBPQA{?;g(AE|0D)g:=1BrUbKF>T1oIv');

Now save changes to wp-config-sample.php

Rename the sample config file (to make it live)

sudo mv wp-config-sample.php wp-config.php
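The same edits and the rename can also be scripted with sed (a sketch; the password value is a placeholder for the one created earlier):

```shell
# Patch the three DB defines in place, then activate the config.
sudo sed -i \
    -e "s/database_name_here/databasemena123/" \
    -e "s/username_here/blogusername123/" \
    -e "s/password_here/YourDbPasswordHere/" \
    /www/wp-config-sample.php
sudo mv /www/wp-config-sample.php /www/wp-config.php
```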

You can now load your website ( https://www.yourserver.com ) and finish the setup in the WordPress GUI.

Wordpress Setup GUI

WordPress should now be installed and you can log in.

Don’t forget to update your options – /wp-admin/options-general.php

I would recommend you review the options to prevent comment spam – /wp-admin/options-discussion.php

Also, if you are using the twentyseventeen theme, consider updating your header image (remove the pot plant) – /wp-admin/customize.php?theme=twentyseventeen&return=%2Fwp-admin%2Fthemes.php

Signup for a Vultr server here for as low as $2.5 a month, or a Digital Ocean server here ($10 free credit / 2 months). Signup for G Suite email on Google here and read my guide here.

Read this guide on using the wp-cli tool to automate post-install.

I hope this helps.


v1.1 added wp-cli tool

Filed Under: Cloud, Server, Ubuntu, VM, Vultr, Wordpress Tagged With: comamnd line, instal, vm, wordpress

Securing Ubuntu in the cloud

August 9, 2017 by Simon

It is easy to deploy servers to the cloud; within a few minutes you can have a cloud-based server that you (or others) can use. Ubuntu has a great guide on setting up basic security, but what do you actually need to do?

If you do not secure your server, expect it to be hacked into. Below are tips on securing your cloud server.

First, read more on scanning your server with Lynis security scan.

Always use up-to-date software

Always use up-to-date software; malicious users can detect what software you run with sites like shodan.io (or port scan tools) and then look up weaknesses in well-published lists (e.g. WordPress, Windows, MySQL, Node, LifeRay, Oracle etc). People can even use Google to search for login pages or sites with passwords in HTML (yes, it is that simple).  Once a system is identified, a malicious user can send automated bots to break into your site (trying millions of passwords a day) or use tools to bypass existing defences (security researcher Troy Hunt found out it’s child’s play).

Port-scanning sites like https://mxtoolbox.com/SuperTool.aspx?action=scan are good for knowing what you have exposed.

You can also use local programs like nmap to view open ports

Install nmap

sudo apt-get install nmap

Find open ports

nmap -v -sT localhost

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-08 23:57 AEST
Initiating Connect Scan at 23:57
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 9101/tcp on 127.0.0.1
Discovered open port 9102/tcp on 127.0.0.1
Discovered open port 9103/tcp on 127.0.0.1
Completed Connect Scan at 23:57, 0.05s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00020s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql
9101/tcp open  jetdirect
9102/tcp open  jetdirect
9103/tcp open  jetdirect

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)
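If you run nmap regularly, a tiny helper can reduce that report to just the open ports. This is a sketch, assuming nmap's standard "PORT STATE SERVICE" table format shown above:

```shell
#!/bin/sh
# Filter an nmap report down to the open TCP ports only.
open_ports() {
  awk '/\/tcp +open/ {print $1}'
}

# example usage: nmap -v -sT localhost | open_ports
```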

Limit ssh connections

Read more here.

Use ufw to set limits on login attempts

sudo ufw limit ssh comment 'Rate limit hit for openssh server'

Only allow known IPs access to your valuable ports

sudo ufw allow from 123.123.123.123/32 to any port 22

Delete unwanted firewall rules

sudo ufw status numbered
sudo ufw delete 8

Only allow known IPs to certain ports

sudo ufw allow from 123.123.123.123 to any port 80/tcp

Also, set outgoing traffic to known active servers and ports

sudo ufw allow out from 123.123.123.123 to any port 22

Don’t use a weak/common Diffie-Hellman key for SSL certificates; more information here.

openssl req -new -newkey rsa:4096 -nodes -keyout server.key -out server.csr
 
Generating a 4096 bit RSA private key
...

More info on generating SSL certs here and setting up Public Key Pinning here.
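A related hardening step, if you are using NGINX (an assumption on my part, and the path /etc/nginx/dhparam.pem is just an example), is to generate your own Diffie-Hellman group rather than relying on a common default:

```
# generate a 2048-bit DH group once (this can take a few minutes):
#   sudo openssl dhparam -out /etc/nginx/dhparam.pem 2048
# then reference it inside the NGINX server block:
ssl_dhparam /etc/nginx/dhparam.pem;
```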

Intrusion Prevention Software

Do run fail2ban: Guide here https://www.linode.com/docs/security/using-fail2ban-for-security

I use iThemes Security to secure my WordPress and block repeat failed logins from certain IP addresses.

iThemes Security can even lock down your WordPress.

You can set iThemes to auto lock out users on x failed logins

Remember to whitelist your own IP first though (it is very easy to lock yourself out of your own server).

Passwords

Do have strong passwords and change the root password provided by the host. https://howsecureismypassword.net/ is a good site to see how strong your password is against brute force attempts. https://www.grc.com/passwords.htm is a good site to obtain a strong password. Do follow Troy Hunt’s blog and Twitter account to keep up to date with security issues.

Configure a Firewall Basics

You should install and configure a firewall on your Ubuntu server and also configure a firewall with your host (e.g. AWS, Vultr, Digital Ocean).

Configure a Firewall on AWS

My AWS server setup guide is here. AWS allows you to configure the firewall in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today
SSH          TCP       22          0.0.0.0/0  Not advisable; try to limit this to known IPs only
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later

Configure a Firewall on Digital Ocean

Configuring a firewall on Digital Ocean (create a $5/m server here).  You can configure your Digital Ocean droplet firewall by clicking Droplet, Networking then Manage Firewall after logging into Digital Ocean.

Configure a Firewall on Vultr

Configuring a firewall on Vultr (create a $2.5/m server here).

Don’t forget to set IP rules for both IPv4 and IPv6. Only open the ports you need and ensure applications have strong passwords.

Ubuntu has a firewall built in (documentation).

sudo ufw status

Enable the firewall

sudo ufw enable

Adding common ports

sudo ufw allow ssh/tcp
sudo ufw logging on
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 53
sudo ufw allow 443
sudo ufw allow 873
sudo ufw enable
sudo ufw status
sudo ufw allow http
sudo ufw allow https

Add a whitelist for your IP (use http://icanhazip.com/ to get your IP) to ensure you won’t get kicked out of your server.

sudo ufw allow from 123.123.123.123/32 to any port 22

More help here. Here is a good guide on ufw commands. Info on port numbers here.

https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
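Because one wrong ufw rule can lock you out, I like to print the planned rules for review before applying them. This is a minimal sketch; the admin IP 203.0.113.10 is a placeholder for your own IP, and you would review the echoed commands before running them:

```shell
#!/bin/sh
# Print a baseline set of ufw commands for review before running them.
plan_rules() {
  admin_ip="$1"   # your own IP, so port 22 stays reachable to you
  echo "sudo ufw limit ssh"
  echo "sudo ufw allow 80/tcp"
  echo "sudo ufw allow 443/tcp"
  echo "sudo ufw allow from ${admin_ip} to any port 22"
  echo "sudo ufw enable"
}

plan_rules "203.0.113.10"
```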

If you don’t have a server yet, you can create a $5 a month Digital Ocean server here or a $2.5 a month Vultr server here.

Backups

rsync is a good way to copy files to another server, or you can use Bacula.

sudo apt install bacula
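As a sketch of the rsync approach (every name here – the backup user, host and paths – is hypothetical), a nightly push to another server can live in the crontab:

```
# crontab entry: push /www to a backup host at 2:30am daily
30 2 * * * rsync -az --delete /www/ backupuser@backup.example.com:/backups/www/
```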

Basics

Initial server setup guide (Digital Ocean).

Sudo (admin user)

Read this guide on the Linux sudo command (the equivalent if run as administrator on Windows).

Users

List users on Ubuntu (or use compgen -u)

cut -d: -f1 /etc/passwd

Common output

cut -d: -f1 /etc/passwd
root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy
www-data
backup
list
irc
gnats
nobody
systemd-timesync
systemd-network
systemd-resolve
systemd-bus-proxy
syslog
_apt
lxd
messagebus
uuidd
dnsmasq
sshd
pollinate
ntp
mysql
clamav
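The list above is mostly system accounts. On Debian/Ubuntu, human accounts normally start at UID 1000, so you can filter /etc/passwd down to just those; a sketch:

```shell
#!/bin/sh
# Print only human accounts from /etc/passwd (UID >= 1000, excluding "nobody").
human_users() {
  awk -F: '$3 >= 1000 && $1 != "nobody" {print $1}'
}

# example usage: human_users < /etc/passwd
```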

Add User

sudo adduser new_username

e.g

sudo adduser bob
Adding user `bob' ...
Adding new group `bob' (1000) ...
Adding new user `bob' (1000) with group `bob' ...
Creating home directory `/home/bob' ...
etc..

Add user to a group

sudo usermod -a -G MyGroup bob

Show users in a group

getent group MyGroup | awk -F: '{print $4}'

This will show users in a group

Remove a user

sudo userdel username
sudo rm -r /home/username

Rename user

usermod -l new_username old_username

Change user password

sudo passwd username

Groups

Show all groups

compgen -g

Common output

compgen -g
root
daemon
bin
sys
adm
tty
disk
lp
mail
proxy
sudo
www-data
backup
irc
etc

You can create your own groups, but first you should check the existing group IDs:

cat /etc/group

Then you can see your system’s groups and IDs.

Create a group

groupadd -g 999 MyGroup

Permissions

Read this https://help.ubuntu.com/community/FilePermissions

How to list users on Ubuntu.

Read more on setting permissions here.

Chmod help can be found here.

Install Fail2Ban

I used this guide on installing Fail2Ban.

sudo apt-get install fail2ban

Check Fail2Ban often and add firewall blocks for known bad IPs.

fail2ban-client status

Best practices

Ubuntu has a guide on basic security setup here.

Startup Processes

It is a good idea to review startup processes from time to time.

sudo apt-get install rcconf
sudo rcconf

Accounts

  • Read up on the concept of least privilege access for apps and services here.
  • Read up on chmod permissions.

Updates

Do update your operating system often.

sudo apt-get update
sudo apt-get upgrade

Minimal software

Only install the software you need.

Exploits and Keeping up to date

Do keep up to date with exploits and vulnerabilities

  • Follow 0xDUDE on twitter.
  • Read the GDI.Foundation page.
  • Visit the Exploit Database
  • Vulnerability & Exploit Database
  • Subscribe to the Security Now podcast.

Secure your applications

  • NodeJS: Enable logging in applications you install or develop.

Ban repeated login attempts with Fail2Ban

Fail2Ban config

sudo nano /etc/fail2ban/jail.conf
[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3
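One caveat: package upgrades can overwrite jail.conf, so the usual advice is to put your overrides in /etc/fail2ban/jail.local instead (same syntax; the bantime below is an example value in seconds):

```
[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3
bantime  = 3600
```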

Hosts File Hardening

sudo nano /etc/host.conf

Add

order bind,hosts
nospoof on

Add a whitelist with your IP in /etc/fail2ban/jail.conf (see this)

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not                          
# ban a host which matches an address in this list. Several addresses can be                             
# defined using space separator.
                                                                         
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8

Restart the service

sudo service fail2ban restart
sudo service fail2ban status

Intrusion detection (logging) systems

Tripwire will not block or prevent intrusions, but it will log them and give you a heads up on risks and things of concern.

Install Tiger and Tripwire.

sudo apt-get install tiger tripwire

Running Tiger

sudo tiger

This will scan your system for issues of note

sudo tiger
Tiger UN*X security checking system
   Developed by Texas A&M University, 1994
   Updated by the Advanced Research Corporation, 1999-2002
   Further updated by Javier Fernandez-Sanguino, 2001-2015
   Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
   Covered by the GNU General Public License (GPL)

Configuring...

Will try to check using config for 'x86_64' running Linux 4.4.0-89-generic...
--CONFIG-- [con005c] Using configuration files for Linux 4.4.0-89-generic. Using
           configuration files for generic Linux 4.
Tiger security scripts *** 3.2.3, 2008.09.10.09.30 ***
20:42> Beginning security report for simon.
20:42> Starting file systems scans in background...
20:42> Checking password files...
20:42> Checking group files...
20:42> Checking user accounts...
20:42> Checking .rhosts files...
20:42> Checking .netrc files...
20:42> Checking ttytab, securetty, and login configuration files...
20:42> Checking PATH settings...
20:42> Checking anonymous ftp setup...
20:42> Checking mail aliases...
20:42> Checking cron entries...
20:42> Checking 'services' configuration...
20:42> Checking NFS export entries...
20:42> Checking permissions and ownership of system files...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Checking for indications of break-in...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Performing rootkit checks...
20:42> Performing system specific checks...
20:46> Performing root directory checks...
20:46> Checking for secure backup devices...
20:46> Checking for the presence of log files...
20:46> Checking for the setting of user's umask...
20:46> Checking for listening processes...
20:46> Checking SSHD's configuration...
20:46> Checking the printers control file...
20:46> Checking ftpusers configuration...
20:46> Checking NTP configuration...
20:46> Waiting for filesystems scans to complete...
20:46> Filesystems scans completed...
20:46> Performing check of embedded pathnames...
20:47> Security report completed for simon.
Security report is in `/var/log/tiger/security.report.simon.170809-20:42'.

My Output.

sudo nano /var/log/tiger/security.report.username.170809-18:42

Security scripts *** 3.2.3, 2008.09.10.09.30 ***
Wed Aug  9 18:42:24 AEST 2017
20:42> Beginning security report for username (x86_64 Linux 4.4.0-89-generic).

# Performing check of passwd files...
# Checking entries from /etc/passwd.
--WARN-- [pass014w] Login (bob) is disabled, but has a valid shell.
--WARN-- [pass014w] Login (root) is disabled, but has a valid shell.
--WARN-- [pass015w] Login ID sync does not have a valid shell (/bin/sync).
--WARN-- [pass012w] Home directory /nonexistent exists multiple times (3) in
         /etc/passwd.
--WARN-- [pass012w] Home directory /run/systemd exists multiple times (2) in
         /etc/passwd.
--WARN-- [pass006w] Integrity of password files questionable (/usr/sbin/pwck
         -r).

# Performing check of group files...

# Performing check of user accounts...
# Checking accounts from /etc/passwd.
--WARN-- [acc021w] Login ID dnsmasq appears to be a dormant account.
--WARN-- [acc022w] Login ID nobody home directory (/nonexistent) is not
         accessible.

# Performing check of /etc/hosts.equiv and .rhosts files...

# Checking accounts from /etc/passwd...

# Performing check of .netrc files...

# Checking accounts from /etc/passwd...

# Performing common access checks for root (in /etc/default/login, /securetty, and /etc/ttytab...
--WARN-- [root001w] Remote root login allowed in /etc/ssh/sshd_config

# Performing check of PATH components...
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
# Only checking user 'root'

# Performing check of anonymous FTP...

# Performing checks of mail aliases...
# Checking aliases from /etc/aliases.

# Performing check of `cron' entries...
--WARN-- [cron005w] Use of cron is not restricted

# Performing check of 'services' ...
# Checking services from /etc/services.
--WARN-- [inet003w] The port for service ssmtp is also assigned to service
         urd.
--WARN-- [inet003w] The port for service pipe-server is also assigned to
         service search.

# Performing NFS exports check...

# Performing check of system file permissions...
--ALERT-- [perm023a] /bin/su is setuid to `root'.
--ALERT-- [perm023a] /usr/bin/at is setuid to `daemon'.
--ALERT-- [perm024a] /usr/bin/at is setgid to `daemon'.
--WARN-- [perm001w] The owner of /usr/bin/at should be root (owned by daemon).
--WARN-- [perm002w] The group owner of /usr/bin/at should be root.
--ALERT-- [perm023a] /usr/bin/passwd is setuid to `root'.
--ALERT-- [perm024a] /usr/bin/wall is setgid to `tty'.

# Checking for known intrusion signs...
# Testing for promiscuous interfaces with /bin/ip
# Testing for backdoors in inetd.conf

# Performing check of files in system mail spool...

# Performing check for rookits...
# Running chkrootkit (/usr/sbin/chkrootkit) to perform further checks...
--WARN-- [rootkit004w] Chkrootkit has detected a possible rootkit installation
Possible Linux/Ebury - Operation Windigo installetd

# Performing system specific checks...
# Performing checks for Linux/4...

# Checking boot loader file permissions...
--WARN-- [boot02] The configuration file /boot/grub/menu.lst has group
         permissions. Should be 0600
--FAIL-- [boot02] The configuration file /boot/grub/menu.lst has world
         permissions. Should be 0600
--WARN-- [boot06] The Grub bootloader does not have a password configured.

# Checking for vulnerabilities in inittab configuration...

# Checking for correct umask settings for init scripts...
--WARN-- [misc021w] There are no umask entries in /etc/init.d/rcS

# Checking Logins not used on the system ...

# Checking network configuration
--FAIL-- [lin013f] The system is not protected against Syn flooding attacks
--WARN-- [lin017w] The system is not configured to log suspicious (martian)
         packets

# Verifying system specific password checks...

# Checking OS release...
--WARN-- [osv004w] Unreleased Debian GNU/Linux version `stretch/sid'

# Checking installed packages vs Debian Security Advisories...

# Checking md5sums of installed files

# Checking installed files against packages...
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/udev/hwdb.bin' does not belong to any package.

# Performing check of root directory...

# Checking device permissions...
--WARN-- [dev003w] The directory /dev/block resides in a device directory.
--WARN-- [dev003w] The directory /dev/char resides in a device directory.
--WARN-- [dev003w] The directory /dev/cpu resides in a device directory.
--FAIL-- [dev002f] /dev/fuse has world permissions
--WARN-- [dev003w] The directory /dev/hugepages resides in a device directory.
--FAIL-- [dev002f] /dev/kmsg has world permissions
--WARN-- [dev003w] The directory /dev/lightnvm resides in a device directory.
--WARN-- [dev003w] The directory /dev/mqueue resides in a device directory.
--FAIL-- [dev002f] /dev/rfkill has world permissions
--WARN-- [dev003w] The directory /dev/vfio resides in a device directory.

# Checking for existence of log files...
--FAIL-- [logf005f] Log file /var/log/btmp permission should be 660
--FAIL-- [logf007f] Log file /var/log/messages does not exist

# Checking for correct umask settings for user login shells...
--WARN-- [misc021w] There is no umask definition for the dash shell
--WARN-- [misc021w] There is no umask definition for the bash shell

# Checking symbolic links...

# Performing check of embedded pathnames...
20:47> Security report completed for username.

More on Tripwire here.

Hardening PHP

Harden your PHP config (and back it up first). Start by creating an info.php file in your website root folder with this content:

<?php
phpinfo()
?>

Now load the page and look for the “Loaded Configuration File” entry to see which php.ini file PHP is using (you can also run php --ini on the command line).

Back up that PHP config file.

TIP: Delete the file with phpinfo() in it as it is a security risk to leave it there.

TIP: Read the OWASP cheat sheet on using PHP securely here and securing php.ini here.

Some common security changes

file_uploads = On
expose_php = Off
error_reporting = E_ALL
display_errors          = Off
display_startup_errors  = Off
log_errors              = On
error_log = /php_errors.log
ignore_repeated_errors  = Off

Don’t forget to review logs, more config changes here.

Antivirus

Yes, it is a good idea to run antivirus on Ubuntu; here is a good list of antivirus software.

I am installing ClamAV as it can be installed on the command line and is open source.

sudo apt-get install clamav

ClamAV help here.

Scan a folder

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Setup auto-update antivirus definitions

sudo dpkg-reconfigure clamav-freshclam

I set auto updates 24 times a day (every hour) via daemon updates.

Tip: download manual antivirus update definitions. If you only have a 512MB server the update may fail, so you may want to stop freshclam, PHP, NGINX and MySQL before you update to ensure the antivirus definitions download. You can move this to a cron job that runs at set times (instead of the daemon) to ensure updates happen.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v
Current working dir is /var/lib/clamav
Max retries == 5
ClamAV update process started at Tue Aug  8 22:22:02 2017
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1152
Software version from DNS: 0.99.2
Retrieving http://db.au.clamav.net/main.cvd
Trying to download http://db.au.clamav.net/main.cvd (IP: 193.1.193.64)
Downloading main.cvd [100%]
Loading signatures from main.cvd
Properly loaded 4566249 signatures from new main.cvd
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Querying main.58.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/daily.cvd
Trying to download http://db.au.clamav.net/daily.cvd (IP: 193.1.193.64)
Downloading daily.cvd [100%]
Loading signatures from daily.cvd
Properly loaded 1742284 signatures from new daily.cvd
daily.cvd updated (version: 23644, sigs: 1742284, f-level: 63, builder: neo)
Querying daily.23644.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/bytecode.cvd
Trying to download http://db.au.clamav.net/bytecode.cvd (IP: 193.1.193.64)
Downloading bytecode.cvd [100%]
Loading signatures from bytecode.cvd
Properly loaded 66 signatures from new bytecode.cvd
bytecode.cvd updated (version: 308, sigs: 66, f-level: 63, builder: anvilleg)
Querying bytecode.308.82.1.0.C101C140.ping.clamav.net
Database updated (6308599 signatures) from db.au.clamav.net (IP: 193.1.193.64)

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart 

sudo /etc/init.d/clamav-freshclam start

Manual scan with a bash script

Create a bash script

sudo mkdir /scripts
sudo nano /scripts/updateandscanav.sh

# Include contents below.
# Save and quit

sudo chmod +x /scripts/updateandscanav.sh

Bash script contents to update antivirus definitions.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart

sudo /etc/init.d/clamav-freshclam start

sudo clamscan --max-filesize=3999M --max-scansize=3999M -v -r /

Edit the crontab to run the script every hour

crontab -e
1 * * * * /bin/bash /scripts/updateandscanav.sh > /dev/null 2>&1

Uninstalling ClamAV

You may need to uninstall ClamAV if you don’t have much memory or find the updates are too big.

sudo apt-get remove --auto-remove clamav
sudo apt-get purge --auto-remove clamav

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d

At login, you should receive

0 updates are security updates.
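For reference, the schedule itself lives in /etc/apt/apt.conf.d/20auto-upgrades; a minimal version looks like this (the values are in days):

```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```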

Other

  • Read this awesome guide.
  • Install Fail2Ban.
  • Do check your log files if you suspect suspicious activity.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server


v1.92 added hardening a linux server link

Filed Under: Ads, Advice, Analitics, Analytics, Android, API, App, Apple, Atlassian, AWS, Backup, BitBucket, Blog, Business, Cache, Cloud, Community, Computer, CoronaLabs, Cost, CPI, DB, Development, Digital Ocean, DNS, Domain, Email, Feedback, Firewall, Free, Git, GitHub, GUI, Hosting, Investor, IoT, JIRA, LetsEncrypt, Linux, Malware, Marketing, mobile app, Monatization, Monetization, MongoDB, MySQL, Networking, NGINX, NodeJS, NoSQL, OS, Planning, Project, Project Management, Psychology, push notifications, Raspberry Pi, Redis, Route53, Ruby, Scalability, Scalable, Security, SEO, Server, Share, Software, ssl, Status, Strength, Tech Advice, Terminal, Transfer, Trello, Twitter, Ubuntu, Uncategorized, Video Editing, VLOG, VM, Vultr, Weakness, Web Design, Website, Wordpress Tagged With: antivirus, brute force, Firewall

Using phpservermonitor.org to check whether your websites and servers are up and running

July 30, 2017 by Simon

https://www.phpservermonitor.org/ – PHP Server Monitor is a script that checks whether your websites and servers are up and running. It comes with a web based user interface where you can manage your services and websites, and you can manage users for each server with a mobile number and email address.

Features

  • Monitor services and websites (see below).
  • Email, SMS and Pushover notifications.
  • View history graphs of uptime and latency.
  • User authentication with 2 levels (administrator and regular user).
  • Logs of connection errors, outgoing emails and text messages.
  • Easy cronjob implementation to automatically check your servers.

FYI you can set up an Ubuntu Vultr VM here (my guide here) or a Digital Ocean server here (my guide here) in minutes (and only be charged by the hour). Vultr VMs can be purchased from as low as $2.5 a month (NY location) and Digital Ocean from $5 a month.

Open Source

PHP Server Monitor is an open source project 🙂

https://github.com/phpservermon/phpservermon

Installation

fyi: Installation instructions are located here.  More detailed install instructions can be found in the zip file under docs/install.rst.

Go to https://www.phpservermonitor.org/download/ and download the 2.4MB phpservermon-3.2.0.zip, then extract its contents.

Upload the files to your website.

Run the install script https://thesubdomain.thedomain.com/phpservermon-3.2.0/install.php then follow the prompts.

I have already set my time zone so I’ll ignore this warning.

If you want to change the time zone, run these commands.

sudo hwclock --show
sudo dpkg-reconfigure tzdata
sudo reboot
sudo hwclock --show

Then add the database details. I created the MySQL database and user using the Adminer utility.

I created a config.php as instructed.

<?php
define('PSM_DB_HOST', 'localhost');
define('PSM_DB_PORT', '3306');
define('PSM_DB_NAME', 'thedatabase');
define('PSM_DB_USER', 'thedatabaseuser');
define('PSM_DB_PASS', 'removed');
define('PSM_DB_PREFIX', 'psm_');
define('PSM_BASE_URL', 'https://thesubdomain.thedomain.com/phpservermon-3.2.0');
?>

Create an account.

Installation Success.

I logged into the pro server monitor webpage that I just installed.

Configuration

I logged into the PHP Server monitor and configured a website to monitor ( at /phpservermon-3.2.0/?&mod=server&action=edit ).

I added this string to the HTML source of the web pages I want to monitor.

<!-- phpservermoncheckforthis -->
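You can sanity-check the same thing PHP Server Monitor does with a quick shell helper; a sketch (the curl line is commented out, and example.com is a placeholder for your own site):

```shell
#!/bin/sh
# Return success if the page body on stdin contains the monitoring marker.
has_marker() {
  grep -q 'phpservermoncheckforthis'
}

# example usage:
# curl -s https://example.com/ | has_marker && echo "marker present"
```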

I added a few websites to monitor.

Other

Here are the other things you can monitor

Table of objects to monitor

Here is my table of objects to monitor.

Here is a table of my active servers being monitored (I am monitoring 3x web page content and IP pings).

One is failing because the page does not contain the string I defined 🙂

Integration with custom status pages

todo.

Configure SMS Alerts

The config screen has multiple SMS providers to choose from.


Configure Pushover Alerts

The config screens have links to create a pushover alerts app.


Automation

todo:  Review crontab.

Recent Features

  • SSL expiration checks

Conclusion

I am happy with how easily PHP Server Monitor monitors my websites.


v1.3 added screenshots of SMS and pushover  (6:04pm 30th July 2017 AEST)

Filed Under: AWS, Cloud, Digital Ocean, Domain, Hosting, Linux, Status, VM Tagged With: monitor, server, status

Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute

July 29, 2017 by Simon

I visited https://letsencrypt.org/, where it said Let’s Encrypt is a free, automated, and open SSL Certificate Authority. That sounds great; time to check them out. This may not take 1 minute on your server but it did on mine (a self-managed Ubuntu 16.04/NGINX server). If you are not sure why you need an SSL cert, read Life Is About to Get a Whole Lot Harder for Websites Without Https from Troy Hunt.

FYI you can set up an Ubuntu Vultr VM here (my guide here) for as low as $2.5 a month or a Digital Ocean VM server here (my guide here) for $5 a month; billing is charged by the hour and is cheap as chips.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

But for the best performing server read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here). Also read my recent post on setting up Let’s Encrypt on subdomains.

I clicked Get Started and read the Getting Started guide. I was redirected to https://certbot.eff.org/ where it said: “Automatically enable HTTPS on your website with EFF’s Certbot, deploying Let’s Encrypt certificates.“ I was asked what web server and OS I use.

I confirmed my Linux version

lsb_release -a

Ensure your NGINX is set up (read my Vultr guide here) and you have a “server_name” specified in the “/etc/nginx/sites-available/default” file.

e.g

server_name yourdomain.com www.yourdomain.com;

I also like to set “root” to “/www” in the NGINX configuration.

e.g

root /www;

Tip: Ensure the www folder is set up first and has ownership.

mkdir /www
sudo chown -R www-data:www-data /www

Also, make and verify the contents of a /www/index.html file.

echo "Hello World..." > /www/index.html && cat /www/index.html

I then selected my environment on the site (NGINX and Ubuntu 16.04) and was redirected to the setup instructions.

FYI: I will remove mention of my real domain and substitute with thesubdomain.thedomain.com for security in the output below.

I was asked to run these commands

sudo apt-get update
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-nginx

Detailed instructions here.

Obtaining an SSL Certificate

I then ran the following command to automatically obtain and install (configure NGINX) an SSL certificate.

sudo certbot --nginx

Output

sudo certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel):Invalid email address: .
Enter email address (used for urgent renewal and security notices)  If you
really want to skip this, you can run the client with
--register-unsafely-without-email but make sure you then backup your account key
from /etc/letsencrypt/accounts   (Enter 'c' to cancel): [email protected]

-------------------------------------------------------------------------------
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.1.1-August-1-2016.pdf. You must agree
in order to register with the ACME server at
https://acme-v01.api.letsencrypt.org/directory
-------------------------------------------------------------------------------
(A)gree/(C)ancel: A

-------------------------------------------------------------------------------
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about EFF and
our work to encrypt the web, protect its users and defend digital rights.
-------------------------------------------------------------------------------
(Y)es/(N)o: Y

Which names would you like to activate HTTPS for?
-------------------------------------------------------------------------------
1: thesubdomain.thedomain.com
-------------------------------------------------------------------------------
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel):1
Obtaining a new certificate
Performing the following challenges:
tls-sni-01 challenge for thesubdomain.thedomain.com
Waiting for verification...
Cleaning up challenges
Deployed Certificate to VirtualHost /etc/nginx/sites-enabled/default for set(['thesubdomain.thedomain.com', 'localhost'])
Please choose whether HTTPS access is required or optional.
-------------------------------------------------------------------------------
1: Easy - Allow both HTTP and HTTPS access to these sites
2: Secure - Make all requests redirect to secure HTTPS access
-------------------------------------------------------------------------------
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/default

-------------------------------------------------------------------------------
Congratulations! You have successfully enabled https://thesubdomain.thedomain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=thesubdomain.thedomain.com
-------------------------------------------------------------------------------

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at
   /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem. Your cert will expire on 2017-10-27. To obtain a new or tweaked version
   of this certificate in the future, simply run certbot again with
   the "certonly" option. To non-interactively renew *all* of your
   certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

That was the easiest SSL cert generation in history.

SSL Certificate Renewal (dry run)

sudo certbot renew --dry-run

Saving debug log to /var/log/letsencrypt/letsencrypt.log

-------------------------------------------------------------------------------
Processing /etc/letsencrypt/renewal/thesubdomain.thedomain.com.conf
-------------------------------------------------------------------------------
Cert not due for renewal, but simulating renewal for dry run
Renewing an existing certificate
Performing the following challenges:
tls-sni-01 challenge for thesubdomain.thedomain.com
Waiting for verification...
Cleaning up challenges

-------------------------------------------------------------------------------
new certificate deployed with reload of nginx server; fullchain is
/etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem
-------------------------------------------------------------------------------
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)

IMPORTANT NOTES:
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.

SSL Certificate Renewal (Live)

certbot renew

The Lets Encrypt SSL certificate is only a 90-day certificate.

Again: The Lets Encrypt SSL certificate is only a 90-day certificate.

I’ll run “certbot renew” again in 2 months’ time to manually renew the certificate (and apply my higher-security configuration (see below)).

Certbot NGINX Config renew (what did it do)

It’s nice to see a forced-HTTPS redirect added to the configuration:

if ($scheme != "https") {
   return 301 https://$host$request_uri;
} # managed by Certbot

Certificate settings added:

    listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/thesubdomain.thedomain.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/thesubdomain.thedomain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

Contents of /etc/letsencrypt/options-ssl-nginx.conf

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;

ssl_ciphers "ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS";

This contains too many legacy ciphers for my liking.

I changed /etc/letsencrypt/options-ssl-nginx.conf to tighten ciphers and add TLS 1.3 (as my NGINX Supports it).

ssl_session_cache shared:le_nginx_SSL:1m;
ssl_session_timeout 1440m;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;

ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
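You can see exactly which suites a cipher string like this expands to with the openssl ciphers command (the exact list depends on your local OpenSSL version):

```shell
# Expand the tightened cipher string into the concrete suites it enables.
# The output varies with your OpenSSL version.
openssl ciphers -v 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'
```

Running the same command against the longer default string above is a quick way to compare how much the list shrinks.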

Enabling OCSP Stapling and Strict Transport Security in NGINX

I add the following to /etc/nginx/sites/available/default

# OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

Restart NGINX.

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

SSL Labs SSL Score

I am happy with this.

Read my guide on Beyond SSL with Content Security Policy, Public Key Pinning etc

Automatic SSL Certificate Renewal

There are ways to auto-renew the SSL certs floating around YouTube, but I’ll stick to manual issue and renewal of SSL certificates.
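For reference, if you did want to automate it, the usual approach is a cron entry like the sketch below (the schedule and the reload hook are my choices, not Certbot defaults):

```
# Example /etc/crontab entry: attempt renewal twice a week at 03:00.
# certbot only renews certificates that are close to expiry, and the
# post-hook reloads NGINX so the new certificate is picked up.
0 3 * * 1,4 root certbot renew --quiet --post-hook "systemctl reload nginx"
```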

SSL Checker Reports

I checked the certificate with other SSL checking sites.

NameCheap SSL Checker – https://decoder.link/sslchecker/ (Passed). I did notice that the certificate will expire in 89 days (I was not aware of that). I guess a free 90-day certificate for a noncritical server is OK (as long as I renew it in time).

CertLogik – https://certlogik.com/ssl-checker/ (OK)

Comodo – https://sslanalyzer.comodoca.com (OK)
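You can also check expiry locally with openssl instead of a web checker. The sketch below creates a throwaway self-signed certificate so it runs anywhere; on a real server, point the last two commands at /etc/letsencrypt/live/yourdomain/cert.pem instead:

```shell
# Create a throwaway 90-day self-signed cert purely for demonstration
# (on a real server, inspect /etc/letsencrypt/live/<domain>/cert.pem).
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout /tmp/demo.key -out /tmp/demo.crt -subj "/CN=demo.example"

# Print the certificate's expiry date
openssl x509 -enddate -noout -in /tmp/demo.crt

# Exit 0 only if the cert is still valid in 30 days (2592000 seconds)
openssl x509 -checkend 2592000 -noout -in /tmp/demo.crt && echo "More than 30 days left"
```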

Lets Encrypt SSL Certificate Pros

  • Free.
  • Secure.
  • Easy to install.
  • Easy to renew.
  • Good for local, test or development environments.
  • It auto-detected my domain name (even a subdomain)

Lets Encrypt SSL Certificate Cons

  • The auto-install process does not set up OCSP Stapling (I configured NGINX for it, but the certificate does not support it, possibly to limit the load of certificate revocation checks on the Certificate Authority).
  • The auto-install process does not set up HSTS (I enabled it in NGINX manually).
  • The auto-install process does not set up HPKP. More on enabling Public Key Pinning in NGINX here.
  • Too many ciphers enabled by default.
  • TLS 1.3 is not enabled in my NGINX config by the default certbot secure auto-install (even though my NGINX supports it). More on enabling TLS 1.3 in NGINX here.

Read my guide on Beyond SSL with Content Security Policy, Public Key Pinning etc

I’d recommend you follow these Twitter security users

http://twitter.com/GibsonResearch

https://twitter.com/troyhunt

https://twitter.com/0xDUDE

Troubleshooting

I had one server where certbot failed to verify the SSL certificate, claiming I needed a publicly routable IP (it had one) and that the firewall needed to be disabled (it was). I checked the contents of “/etc/nginx/sites-available/default” and it appeared no SSL values had been added (it was not even listening on port 443).

Certbot Error

I reviewed /var/log/letsencrypt/letsencrypt.log to diagnose the error.

Forcing Certificate Renewal 

Run the following command to force a certificate to renew outside the crontab renewal window.

certbot renew --force-renewal

Conclusion

Free is free, but I’d still use paid certs from Namecheap for important stuff/sites; the lack of OCSP stapling support and the 90-day lifetime are deal breakers for me. The Lets Encrypt certificate is only a 90-day certificate (I’d prefer a 3-year certificate).

A big thank you to the Electronic Frontier Foundation for making this possible and providing a free service (please donate to them).

Lets Encrypt recommends renewing certs every 60 days or using auto-renew tools, but rate limits are in force and Lets Encrypt admits its service is young (will they stick around?). Even Symantec SSL certs are at risk.

Happy SSL’ing.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server

FYI: I followed this guide when setting up Let’s Encrypt on Ubuntu 18.04.

Read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).


v1.8 Force Renew Command

v1.7 Ubuntu 18.04 info

V1.62 added hardening Linux server link

Filed Under: AWS, Cloud, Cost, Digital Ocean, LetsEncrypt, ssl, Ubuntu, VM, Vultr Tagged With: free, lets encrypt, ssl certificate

Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up Centos and Ubuntu servers on Digital Ocean before. Digital Ocean does not have data centres in Australia and this kills scalability. AWS is good but 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a  Credit Card

Vultr add cc

4) Verify your email address, Check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 server: 2 CPU, 4GB RAM, 60GB SSD, 3,000MB Transfer, in Sydney for $20 a month). I enabled IPv6 and Private Networking, and chose Sydney as the server location. Digital Ocean would only have offered 2GB RAM and 40GB SSD at this price. AWS would have charged $80/w.

Vultr deploy vm

2 Cores and 4GB ram is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache and Redis etc).

Vultr 20 month

6) I followed this guide: I generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput
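The same key can be generated non-interactively in one command; the 4096-bit size, the file name and the empty passphrase below are my choices (use a passphrase for anything important):

```shell
# Generate a 4096-bit RSA key pair with no prompts.
# vultr_rsa is an arbitrary file name; -N "" sets an empty passphrase.
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -f ~/.ssh/vultr_rsa -N "" -q
cat ~/.ssh/vultr_rsa.pub
```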

Vultr add ssh key

7) I was a bit confused by the UI when adding the SSH key on the in-progress deploy server screen (the SSH key was added but was not highlighted, so I recreated the server to deploy and the SSH key now appears).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the Same @ and www A Name records (fyi “@” was replaced with “”)).

Vultr dns

I waited 60 minutes for DNS propagation to complete. I used the site https://www.whatsmydns.net to track how far DNS replication had spread while I was still receiving an error.

Setting the Server’s Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind).

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney Australia.

sudo dpkg-reconfigure tzdata

I installed the NTP time syncing service

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂

Installing the NGINX Web Server (Ubuntu)

More on the differences between Apache and NGINX web servers here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Serve

Install PHP 7.x and PHP7.0-FPM  (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

Initial NGINX Configuring – Pre SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and another from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;


        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session (used by limit_req above)
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap chat to resolve a privacy error (I stuffed up and was getting the errors “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo();
?>

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history

HISTSIZE=10000
HISTCONTROL=ignoredups
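These two settings only affect the current session; to persist them for future logins, append them to ~/.bashrc (a sketch using the same values as above):

```shell
# Persist history settings across sessions by appending them to ~/.bashrc
cat >> ~/.bashrc <<'EOF'
HISTSIZE=10000
HISTCONTROL=ignoredups
EOF
tail -n 2 ~/.bashrc
```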

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was set up by clicking my server, then Settings, then Firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx

Assign a Domain (Vultr)

I assigned a domain to my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IPs

You should also set up a static IP in /etc/network/interfaces, as mentioned in the settings for your server at https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Backup your existing Ubuntu 16.04 DHCP Network Configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure a static IP once the server is being refused a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with automatic DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing ens3 to your network adapter) from the web console.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIDR: 64, Recursive DNS: None)
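For reference, the static configuration you end up with will look roughly like the sketch below. Every value here is a placeholder; the adapter name ens3 and the exact address/netmask/gateway must come from your Vultr ticket or the “network configuration examples” page:

```
# /etc/network/interfaces (sketch -- all values are placeholders)
auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
    address xx.xx.xx.xx       # your server's IPv4 address
    netmask 255.255.254.0     # netmask from the Vultr network config page
    gateway xx.xx.xx.1        # gateway from the Vultr network config page
    dns-nameservers 8.8.8.8   # or your preferred resolver

iface ens3 inet6 static
    address xx:xx:xx:xx::x    # your IPv6 address
    netmask 64
```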

Install an FTP Server (Ubuntu)

I decided on pure-ftpd based on this advice. I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to log in via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connecting to my Vultr domain with C9.io

I logged in to C9.io, created a new remote SSH connection to my Vultr server, copied the C9 SSH key and added it to my Vultr authorized keys file.
sudo nano authorized_keys

I opened the site with C9 and it setup my environment.

I do love C9.io

Vultr c9

Add an SSL certificate (Reissue existing SSL cert at NameCheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate so I reissued it.  I will follow the Namecheap reissue guide here.

I recreated certificates

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr

I posted the CSR into Name Cheap Reissue Certificate Form.

Vultr ssl cert

Tip: Make sure your new certificate uses the same name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the NameCheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX index file list.

I added the file to the index line in /etc/nginx/sites-available/default:

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked Namecheap chat (Alexandra Belyaninova) to speed up the HTTP Domain verification and they said the text file needs to be placed in /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53gudremovedE5.txt and http://www.mydomain.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the ssl cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what is next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime rather than downloading a nice new one from here.

openssl dhparam -out dhparams4096.pem 4096

This Namecheap guide will tell you how to activate a new certificate and how to generate a CSR file. Note: the guide linked to the left generates a 2048-bit key, and this will cap your SSL certificate’s security to a B at https://www.ssllabs.com/ssltest, so I recommend you generate a 4096-bit CSR key and a 4096-bit Diffie-Hellman key.
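A 4096-bit CSR can be generated non-interactively by adding -subj; the key size is the point of the example, while the domain and subject fields are placeholders:

```shell
# Generate a 4096-bit private key and CSR with no interactive prompts
# (replace mydomain.com and the subject fields with your own details).
openssl req -newkey rsa:4096 -nodes \
  -keyout mydomain_com.key -out mydomain_com.csr \
  -subj "/C=AU/ST=NSW/L=Sydney/O=Example Org/CN=mydomain.com"

# Confirm the CSR really carries a 4096-bit key
openssl req -in mydomain_com.csr -noout -text | grep "Public-Key"
```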

I used https://certificatechain.io/ to generate a valid certificate chain.

My SSL /etc/nginx/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on; # disabled: this caused too many HTTP redirects
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and another from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connections on non-responding clients; this will free up memory
        reset_timedout_connection on;

        # if the client stops responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;
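These two zone directives only allocate the shared memory; the limits take effect where `limit_conn` and `limit_req` reference them (as is done near the top of this `http` block). As a sketch, the same zones could instead be scoped to a single sensitive location (`/login` is a hypothetical path; `limit_req_status` needs nginx 1.3.15+):

```nginx
# Hypothetical example: throttle only the login URL instead of the whole site
location /login {
        limit_req zone=req_limit_per_ip burst=10 nodelay;
        limit_conn conn_limit_per_ip 5;
        limit_req_status 429;   # reply 429 Too Many Requests instead of the default 503
}
```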

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # header buffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	# NOTE: this re-enables access logging, conflicting with 'access_log off' above -- keep one or the other
	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap Chat (Dmitriy) also recommended I clear my google cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date (see https://www.openssl.org/).

I will check often.

Install MySQL GUI

I installed the Adminer MySQL GUI tool (uploaded to the web server).

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP’s upload_max_filesize setting temporarily to allow me to restore a database backup.  I edited /etc/php/7.0/fpm/php.ini and then reloaded PHP:

sudo service php7.0-fpm restart
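For reference, the relevant php.ini directives look like this (the values are examples, not what I used; note that post_max_size must be at least as large as upload_max_filesize or large uploads will still fail):

```ini
upload_max_filesize = 256M
post_max_size = 256M
```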

I used Adminer to restore a database.

Support

I found the email support from Vultr great; I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go.  I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.


Definitely give Vultr a go (they even have a data center in Sydney). Sign up with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.


Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPv4 and IPv6) in Vultr. At first I only allowed IPv4 addresses, but it looks as if Vultr uses IPv6 internally, so add your IPv6 address too (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API contains URLs (and tokens) for your virtual console and root passwords, so ensure your API key is secured.

Here is some working PHP code to query the API

<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // NOTE: disabled for testing only; leave verification on in production
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

print $server_output;
print_r(json_decode($server_output, true)); // decode to an array (echo on the decoded value would fail)
?>
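Before reading fields out of the response it is worth decoding defensively, since json_decode() returns NULL on bad input. A minimal sketch (the JSON string here is a made-up stand-in for a live API response):

```php
<?php
// Stand-in for $server_output = curl_exec($ch); real responses come from the API
$server_output = '{"123456":{"location":"Sydney","vcpu_count":"2"}}';

$array = json_decode($server_output, true);
if ($array === null && json_last_error() !== JSON_ERROR_NONE) {
    die('Bad JSON: ' . json_last_error_msg());
}

$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location\n";
```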

Your server will need curl installed. If you fetch URLs with file_get_contents() instead of curl you will also need to enable URL opening in your php.ini file:

allow_url_fopen = On

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

$array = json_decode($server_output, true);

// Replace 123456 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used GB of $vultr_bandwidth_allowed GB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

//Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";

A raw response from https://api.vultr.com/v1/server/list looks like this:

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#","power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. I found the number formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
          // intentional fall-through: each matched case multiplies again
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to preferred response
    }
}
?>
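To sanity-check the formatting logic, here is a condensed, integer-only variant of the same formula (fmtBytes is my own name for this sketch; it is not part of the original code and expects a positive byte count):

```php
<?php
// Condensed sketch of the swissConverter() formatting step above
function fmtBytes(int $bytes, int $precision = 2): string {
    $suffixes = ['', 'KB', 'MB', 'GB', 'TB'];
    $base = $bytes > 0 ? log($bytes, 1024) : 0;
    // scale back down into the chosen unit, then attach its suffix
    return round(pow(1024, $base - floor($base)), $precision) . ' ' . $suffixes[(int) floor($base)];
}

echo fmtBytes(2048) . "\n";     // 2 KB
echo fmtBytes(1536) . "\n";     // 1.5 KB
```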

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your server’s ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

//Get 123456 Incoming Bytes Day Before Yesterday
$vultr123456_incoming00ib = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_incoming00ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Yesterday
$vultr123456_incoming01ib = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr123456_incoming01ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Today
$vultr123456_incoming02ib = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr123456_incoming02ib, true) . "</strong><br/><br/>";

//Get 123456 Outgoing Bytes Day Before Yesterday
$vultr123456_outgoing00ob = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_outgoing00ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Yesterday
$vultr123456_outgoing01ob = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_outgoing01ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Today
$vultr123456_outgoing02ob = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr123456_outgoing02ob, true) . "</strong><br/>";

echo "<br />";
?>
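The bandwidth endpoint returns, per direction, an array of [date, bytes] pairs (oldest first, today last), which is why the code above indexes [0][1], [1][1] and [2][1]. A sketch of totalling them, using made-up sample data in place of a live API response:

```php
<?php
// Made-up stand-in for json_decode($server_output, true) from the bandwidth API
$array = [
    'incoming_bytes' => [
        ['2017-07-28', 1048576],
        ['2017-07-29', 2097152],
        ['2017-07-30', 524288],
    ],
];

$total = 0;
foreach ($array['incoming_bytes'] as $day) {
    $total += $day[1];   // $day[0] is the date, $day[1] is the byte count
}

echo "Incoming total: $total bytes\n";   // Incoming total: 3670016 bytes
```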

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the following ping function globally:

<?php
function pingfsockopen($host, $port = 443, $timeout = 3)
{
        // Returns TRUE if a TCP connection to $host:$port opens within $timeout seconds
        $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
        if ( ! $fsock )
        {
                return FALSE;
        }
        fclose($fsock); // close the probe socket rather than leaking it
        return TRUE;
}
?>

Now you can grab the server’s IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}

Setup Google DNS

sudo nano /etc/network/interfaces

Add the line:

dns-nameservers 8.8.8.8 8.8.4.4

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.


v1.993 added log info

