


Installing Redis 3.x onto Ubuntu 16.04

September 7, 2017 by Simon

This post shows you how to install Redis 3.x on Ubuntu 16.04.

Redis is an open-source, schema-free, in-memory NoSQL key/value store database that supports server-side Lua scripting and replication between servers. Redis replication is eventually consistent (rather than immediately consistent). Eventual consistency is evil in some people's minds; it does require different coding considerations to guarantee valid data, but it offers speed benefits.
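As a toy illustration of that coding consideration (this is my own sketch, not a real Redis client): with asynchronous replication, a write acknowledged by the primary may not yet be visible on a replica, so code reading from replicas must tolerate stale data.

```python
# Toy model of eventually consistent replication: the primary acknowledges
# a write before the replica has applied it, so a replica read can be stale.
from collections import deque

class ToyReplicatedStore:
    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.pending = deque()  # writes not yet applied to the replica

    def set(self, key, value):
        self.primary[key] = value          # acknowledged immediately
        self.pending.append((key, value))  # replication happens later

    def replica_get(self, key):
        return self.replica.get(key)       # may return stale data

    def replicate(self):
        while self.pending:                # apply queued writes in order
            key, value = self.pending.popleft()
            self.replica[key] = value

store = ToyReplicatedStore()
store.set("testvalue", "123")
print(store.replica_get("testvalue"))  # None - replica has not caught up yet
store.replicate()
print(store.replica_get("testvalue"))  # 123
```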

Redis is the world's ninth most popular database, behind Oracle, MySQL, Microsoft SQL Server, PostgreSQL, MongoDB, DB2, Microsoft Access and Cassandra, and it is the most popular key/value store database. View the database trend chart here.

When to use Redis over MySQL or MongoDB

Here is a great guide on when to use Redis over MongoDB. Read my previous guide on building a scalable and secure MySQL Cluster.

Redis has a place where a relational database or a NoSQL document store does not fit (but it is more work). Read more here.

Redis is great in situations where caching is needed and spare memory is available on the server.

grep MemTotal /proc/meminfo
MemTotal:        4046404 kB

grep MemFree /proc/meminfo
MemFree:         3559244 kB

Free Memory

free -m
              total        used        free      shared  buff/cache   available
Mem:           3951         241        3213           9         496        3474
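If you want to check memory programmatically (e.g. in a provisioning script before enabling Redis caching), the `free -m` output can be parsed; a small sketch using the sample output above:

```python
# Parse `free -m` output into a dict of memory figures (in MB).
def parse_free_output(text):
    lines = text.strip().splitlines()
    headers = lines[0].split()            # total, used, free, shared, ...
    values = lines[1].split()[1:]         # skip the leading "Mem:" label
    return dict(zip(headers, map(int, values)))

sample = """\
              total        used        free      shared  buff/cache   available
Mem:           3951         241        3213           9         496        3474"""

mem = parse_free_output(sample)
print(mem["total"], mem["available"])  # 3951 3474
```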

Installing Redis on Ubuntu

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install redis-server

Backup the conf file

cp /etc/redis/redis.conf /etc/redis/redis.conf.bak

Open the Redis CLI and test it.

redis-cli
127.0.0.1:6379> ping
PONG

Show Redis info (from the redis-cli)

redis-cli
127.0.0.1:6379> info
# Server
redis_version:3.0.6
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:687a2a319020fa42
redis_mode:standalone
os:Linux 4.4.0-116-generic x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:5.3.1
process_id:17920
run_id:6b657bf624b1a91f7ba40c4c8a693024ca88d887
tcp_port:6379
uptime_in_seconds:34
uptime_in_days:0
hz:10
lru_clock:11819524
config_file:/etc/redis/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:508784
used_memory_human:496.86K
used_memory_rss:6905856
used_memory_peak:508784
used_memory_peak_human:496.86K
used_memory_lua:36864
mem_fragmentation_ratio:13.57
mem_allocator:jemalloc-3.6.0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1521768930
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok

# Stats
total_connections_received:2
total_commands_processed:1
instantaneous_ops_per_sec:0
total_net_input_bytes:28
total_net_output_bytes:7
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0

# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:0.04
used_cpu_user:0.02
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Cluster
cluster_enabled:0

# Keyspace

Benchmarking your Redis (quick)

redis-benchmark -h 127.0.0.1 -p 6379 -t set,lpush -n 100000 -q
SET: 90252.70 requests per second
LPUSH: 130548.30 requests per second

Benchmarking your Redis (slow)

redis-benchmark -n 100000
====== PING_INLINE ======
  100000 requests completed in 1.34 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

97.65% <= 1 milliseconds
99.96% <= 4 milliseconds
100.00% <= 4 milliseconds
74794.31 requests per second

====== PING_BULK ======
  100000 requests completed in 1.02 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.74% <= 1 milliseconds
100.00% <= 1 milliseconds
98231.83 requests per second

====== SET ======
  100000 requests completed in 0.93 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.99% <= 1 milliseconds
100.00% <= 1 milliseconds
107066.38 requests per second

====== GET ======
  100000 requests completed in 1.03 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.76% <= 1 milliseconds
99.95% <= 3 milliseconds
100.00% <= 4 milliseconds
96618.36 requests per second

====== INCR ======
  100000 requests completed in 0.87 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.85% <= 1 milliseconds
100.00% <= 1 milliseconds
115340.26 requests per second

====== LPUSH ======
  100000 requests completed in 0.78 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.89% <= 1 milliseconds
100.00% <= 1 milliseconds
128205.13 requests per second

====== LPOP ======
  100000 requests completed in 0.81 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.61% <= 1 milliseconds
100.00% <= 1 milliseconds
124069.48 requests per second

====== SADD ======
  100000 requests completed in 1.38 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

97.02% <= 1 milliseconds
99.97% <= 2 milliseconds
99.97% <= 3 milliseconds
100.00% <= 3 milliseconds
72516.32 requests per second

====== SPOP ======
  100000 requests completed in 1.40 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

98.36% <= 1 milliseconds
99.85% <= 4 milliseconds
99.85% <= 5 milliseconds
99.92% <= 6 milliseconds
99.95% <= 10 milliseconds
100.00% <= 10 milliseconds
71326.68 requests per second

====== LPUSH (needed to benchmark LRANGE) ======
  100000 requests completed in 1.27 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

95.40% <= 1 milliseconds
99.84% <= 2 milliseconds
99.85% <= 4 milliseconds
99.92% <= 5 milliseconds
99.98% <= 6 milliseconds
100.00% <= 6 milliseconds
78492.93 requests per second

====== LRANGE_100 (first 100 elements) ======
  100000 requests completed in 2.16 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

95.95% <= 1 milliseconds
99.81% <= 2 milliseconds
99.82% <= 3 milliseconds
99.84% <= 4 milliseconds
99.90% <= 5 milliseconds
99.93% <= 7 milliseconds
99.94% <= 8 milliseconds
99.98% <= 10 milliseconds
100.00% <= 10 milliseconds
46382.19 requests per second

====== LRANGE_300 (first 300 elements) ======
  100000 requests completed in 5.49 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

1.99% <= 1 milliseconds
91.78% <= 2 milliseconds
98.61% <= 3 milliseconds
99.65% <= 4 milliseconds
99.81% <= 5 milliseconds
99.90% <= 6 milliseconds
99.96% <= 9 milliseconds
99.96% <= 10 milliseconds
99.98% <= 11 milliseconds
100.00% <= 11 milliseconds
18221.57 requests per second

====== LRANGE_500 (first 450 elements) ======
  100000 requests completed in 8.79 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.04% <= 1 milliseconds
60.33% <= 2 milliseconds
86.38% <= 3 milliseconds
97.11% <= 4 milliseconds
99.24% <= 5 milliseconds
99.54% <= 6 milliseconds
99.69% <= 7 milliseconds
99.80% <= 8 milliseconds
99.87% <= 9 milliseconds
99.90% <= 10 milliseconds
99.91% <= 11 milliseconds
99.92% <= 12 milliseconds
99.93% <= 13 milliseconds
99.98% <= 14 milliseconds
99.99% <= 15 milliseconds
100.00% <= 16 milliseconds
100.00% <= 16 milliseconds
11381.74 requests per second

====== LRANGE_600 (first 600 elements) ======
  100000 requests completed in 9.66 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

0.01% <= 1 milliseconds
3.03% <= 2 milliseconds
94.78% <= 3 milliseconds
98.32% <= 4 milliseconds
98.90% <= 5 milliseconds
99.32% <= 6 milliseconds
99.60% <= 7 milliseconds
99.82% <= 8 milliseconds
99.92% <= 9 milliseconds
99.93% <= 10 milliseconds
99.94% <= 11 milliseconds
99.94% <= 12 milliseconds
99.95% <= 15 milliseconds
99.95% <= 16 milliseconds
99.95% <= 18 milliseconds
99.98% <= 19 milliseconds
99.99% <= 20 milliseconds
100.00% <= 20 milliseconds
10350.90 requests per second

====== MSET (10 keys) ======
  100000 requests completed in 1.22 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

97.69% <= 1 milliseconds
99.95% <= 5 milliseconds
100.00% <= 6 milliseconds
82101.80 requests per second
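A quick sanity check on these numbers: with 50 parallel clients each waiting for a reply, the mean time per request is roughly clients ÷ throughput. A sketch using the figures from the runs above:

```python
# Approximate mean per-request latency from redis-benchmark throughput:
# with N parallel clients each waiting for a reply, latency ~= N / rps.
def mean_latency_ms(requests_per_sec, parallel_clients=50):
    return parallel_clients / requests_per_sec * 1000.0

# SET: 107066.38 requests/sec with 50 clients -> about half a millisecond
# per request, consistent with "99.99% <= 1 milliseconds" reported above.
print(round(mean_latency_ms(107066.38), 3))  # 0.467

# LRANGE_300: 18221.57 requests/sec -> a few milliseconds per request.
print(round(mean_latency_ms(18221.57), 2))   # 2.74
```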

How to save and query a simple key/value integer

127.0.0.1:6379> set testvalue 123
OK
127.0.0.1:6379> get testvalue
"123"

How to save and query a simple key/value string

127.0.0.1:6379> set testvalue "Hello World"
OK
127.0.0.1:6379> get testvalue
"Hello World"

Saving and querying multiple values

127.0.0.1:6379> mset testvalue1 "a" testvalue2 "b"
OK
127.0.0.1:6379> mget testvalue1 testvalue2
1) "a"
2) "b"

Clear all

127.0.0.1:6379> flushall
OK

Add to list (new items at the top)

127.0.0.1:6379> lpush testlist "a"
(integer) 1
127.0.0.1:6379> lpush testlist "b"
(integer) 2
127.0.0.1:6379> lpush testlist "c"
(integer) 3
127.0.0.1:6379> lrange testlist 0 -1
1) "c"
2) "b"
3) "a"

Add to list (new items at the bottom)

127.0.0.1:6379> rpush testlist "a"
(integer) 1
127.0.0.1:6379> rpush testlist "b"
(integer) 2
127.0.0.1:6379> rpush testlist "c"
(integer) 3
127.0.0.1:6379> lrange testlist 0 -1
1) "a"
2) "b"
3) "c"
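Under the hood, redis-cli serialises each of the commands above into the RESP wire protocol (an array of bulk strings) before sending it over the TCP connection. A minimal encoder sketch:

```python
# Encode a Redis command as a RESP array of bulk strings, the format
# redis-cli sends to the server over TCP.
def encode_resp(*args):
    out = f"*{len(args)}\r\n"
    for arg in args:
        data = str(arg)
        out += f"${len(data.encode())}\r\n{data}\r\n"
    return out

# "set testvalue 123" from the examples above becomes:
print(repr(encode_resp("SET", "testvalue", "123")))
# '*3\r\n$3\r\nSET\r\n$9\r\ntestvalue\r\n$3\r\n123\r\n'
```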

Redis Documentation

redis.io

redis.io/documentation

https://redis.io/commands/

Redis Crash Course Tutorial

More

5 uses of Redis as a database.

Using Redis at scale at Twitter

Digital Ocean guide on Redis

Coming soon: PHP access to Redis.


v1.2 Removed reference to Redis 4 in the blurb.

v1.1 Oops, this was Redis 3 and not 4 (thanks, Patrik)

v1.0 Initial version

Filed Under: DB, MongoDB, NoSQL, Redis, Server, Ubuntu Tagged With: dv, nosql, redis, ubuntu

How to setup and secure MongoDB on Ubuntu 16.04 and verify with Studio 3T

September 6, 2017 by Simon

Here is how you can install MongoDB 3.6.x on Ubuntu 16.04 and secure it. I use the Studio 3T software from Studio 3T.

Before you install MongoDB, ensure you have set up and secured your server and installed an SSL certificate. Click the links here to set up a Digital Ocean, Vultr or AWS server. Read my old guide to installing MongoDB on Ubuntu 14.04 and using Studio 3T (https://studio3t.com/).

Install MongoDB 3.6

Official guide here

Add the keyserver

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
Executing: /tmp/tmp.T7UNroLh1A/gpg.1.sh --keyserver
hkp://keyserver.ubuntu.com:80
--recv
2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
gpg: requesting key 91FA4AD5 from hkp server keyserver.ubuntu.com
gpg: key 91FA4AD5: public key "MongoDB 3.6 Release Signing Key <[email protected]>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse

Update

sudo apt-get update

Install the latest stable version of MongoDB

sudo apt-get install -y mongodb-org

MongoDB.conf options

FYI: Secure MongoDB: https://docs.mongodb.com/manual/security/ and Security Checklist https://docs.mongodb.com/manual/administration/security-checklist/

MongoDB Hardening Info: https://docs.mongodb.com/manual/core/security-hardening/

Start MongoDB

sudo mongod --port 27017 --dbpath /mongodb_data/ --bind_ip 127.0.0.1 --config /etc/mongod.conf

You should see startup activity

sudo mongod --port 27017 --dbpath /mongodb_data/
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] MongoDB starting : pid=10413 port=27017 dbpath=/mongodb_data/ 64-bit host=yourservername
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] db version v3.6.2
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] git version: 489d177dbd0f0420a8ca04d39fd78d0a2c539420
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] allocator: tcmalloc
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] modules: none
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] build environment:
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten]     distmod: ubuntu1604
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten]     distarch: x86_64
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten]     target_arch: x86_64
2018-01-16T22:46:22.924+1100 I CONTROL  [initandlisten] options: { net: { port: 27017 }, storage: { dbPath: "/mongodb_data/" } }
2018-01-16T22:46:22.925+1100 I STORAGE  [initandlisten]
2018-01-16T22:46:22.925+1100 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1463M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),
2018-01-16T22:46:23.012+1100 I CONTROL  [initandlisten]
2018-01-16T22:46:23.014+1100 I STORAGE  [initandlisten] createCollection: admin.system.version with provided UUID: *******
2018-01-16T22:46:23.026+1100 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 3.6
2018-01-16T22:46:23.033+1100 I STORAGE  [initandlisten] createCollection: local.startup_log with generated UUID: d4271258-e51f-407d-af15-975f61e66eaf
2018-01-16T22:46:23.049+1100 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/mongodb_data/diagnostic.data'
2018-01-16T22:46:23.050+1100 I NETWORK  [initandlisten] waiting for connections on port 27017

In a new terminal window, open a MongoDB shell

mongo --port 27017 --authenticationDatabase 'admin' --username username --password password
>...
>

Create a user

> use admin
switched to db admin
> db.createUser(
...   {
...     user: "yourdbuser",
...     pwd: "**************************************************************",
...     roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
...   }
... )
Successfully added user: {
        "user" : "yourdbuser",
        "roles" : [
                {
                        "role" : "userAdminAnyDatabase",
                        "db" : "admin"
                }
        ]
}
>

Create a test db

show dbs
use testdb
s = { Name : "Test Value" }
{ "Name" : "Test Value" }
db.testdb.insert( s );
WriteResult({ "nInserted" : 1 })
db.testdb.find()
{ "_id" : ObjectId("5a5deba7b563981038f32051"), "Name" : "Test Value" }

Run MongoDB at Startup

Note: I tried setting up a service but it failed, so I added the following command to /etc/rc.local

/usr/bin/mongod --quiet --port 27017 --dbpath /mongodb_data/ --bind_ip 127.0.0.1,##.##.##.## --config /etc/mongod.conf

Todo: Service Setup.

Ignore the following….

Create a service

sudo nano /etc/systemd/system/mongodb.service

I added

[Unit]
Description=High-performance, schema-free document-oriented database
After=network.target

[Service]
User=mongodb
ExecStart=/usr/bin/mongod --quiet --port 27017 --dbpath /mongodb_data/ --bind_ip 127.0.0.1,##.##.##.## --config /etc/mongod.conf

[Install]
WantedBy=multi-user.target

TIP: Bind to local 127.0.0.1 and also your external IP (if you have hardened the server and set up IP whitelists for the port).

Make /etc/init.d/mongod executable

sudo chmod +x /etc/init.d/mongod

Firewall

Configure your firewall (tip: whitelist your local development IP so only you have access)

sudo ufw allow from 123.123.123.12/32 to any port 27017
Rule added

sudo ufw reload
Firewall reloaded

sudo ufw allow out 27017
Rule added
Rule added (v6)

Reload the firewall and show the status. Add port 27017 (or your custom port) to your TCP and UDP firewall. http://icanhazip.com/ will display your public IP.

sudo ufw reload
sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
...
27017                      ALLOW       123.123.123.123
...
27017                      ALLOW OUT   Anywhere
...
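The first ufw rule above uses a /32 CIDR, which matches exactly one source address. A quick way to check what a CIDR whitelist actually matches, using Python's stdlib ipaddress module (the addresses are the placeholders from the rule above):

```python
# Check which source IPs a CIDR firewall whitelist rule would match.
import ipaddress

whitelist = ipaddress.ip_network("123.123.123.12/32")

print(ipaddress.ip_address("123.123.123.12") in whitelist)  # True
print(ipaddress.ip_address("123.123.123.13") in whitelist)  # False
print(whitelist.num_addresses)                              # 1
```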

Display the MongoDB version

mongod -version
db version v3.6.2
git version: 489d177dbd0f0420a8ca04d39fd78d0a2c539420
OpenSSL version: OpenSSL 1.0.2g 1 Mar 2016
allocator: tcmalloc
modules: none
build environment:
 distmod: ubuntu1604
 distarch: x86_64
 target_arch: x86_64

Make MongoDB configuration changes

sudo nano /etc/mongod.conf

– Consider saving your database data somewhere else (mongodb.conf)

storage:
  dbPath: /folder_to_store_mongodb_data/

– Consider redirecting your log file (mongodb.conf)

systemLog:
  destination: file
  logAppend: true
  path: "/folder_to_store_mongodb_logs/mongo.log"

– Consider changing the default MongoDB port (mongodb.conf)

net:
  port: 27123

– Allow MongoDB to talk locally and globally if need be and optionally enable IPV6 (binding IP’s in mongodb.conf)

TIP: Ensure you bind your localhost address (127.0.0.1) AND your public IP (e.g. 123.123.123.123), as you will need to bind the public IP too if you want to connect to MongoDB externally. I did not bind my external IP and was blocked for a few days.

net:
  ipv6: true
  bindIp: 127.0.0.1,123.123.123.123

If you allow external access, consider whitelisting your IP and disabling the localhost authentication bypass – more here (mongodb.conf)

setParameter:
   enableLocalhostAuthBypass: false

Official configuration documentation can be found here.
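Putting the snippets above together, a full /etc/mongod.conf might look like the following (the paths, port and IPs are the placeholders from the examples above, not recommendations):

```yaml
# /etc/mongod.conf - consolidated example of the options discussed above
storage:
  dbPath: /folder_to_store_mongodb_data/

systemLog:
  destination: file
  logAppend: true
  path: "/folder_to_store_mongodb_logs/mongo.log"

net:
  port: 27123
  ipv6: true
  bindIp: 127.0.0.1,123.123.123.123

setParameter:
  enableLocalhostAuthBypass: false
```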

At this stage, MongoDB is open to the world: if you connect to your server with no username or password you will see it is open.

I created an admin user with Studio 3T for MongoDB in the IntelliShell.

I typed the following to create a user (I added a root role after creating the screenshot above).

use admin
db.createUser({user: "yourusername", pwd: "yourpassword", roles:[{role: "userAdminAnyDatabase", db: "admin"},{role: "root", db: "admin"}]})

Verify that the user was created

show users
{ 
    "_id" : "admin.yourusername", 
    "user" : "yourusername", 
    "db" : "admin", 
    "roles" : [
        {
            "role" : "root", 
            "db" : "admin"
        }, 
        {
            "role" : "userAdminAnyDatabase", 
            "db" : "admin"
        }
    ]
}

Now you can use these credentials to log in to the database.


You can see the new credentials are working and now we need to remove anonymous and empty connections.

Add the following to mongodb.conf

security:
 authorization: enabled

Restart MongoDB

sudo systemctl stop mongodb
sudo systemctl start mongodb

or

/usr/bin/mongod --config /etc/mongod.conf

Now when you connect to your database with no login details you will see no databases.


Show the status of MongoDB

sudo systemctl start mongodb
root@yourservername:~# sudo systemctl status mongodb
● mongodb.service - High-performance, schema-free document-oriented database
   Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-08-18 18:00:38 AEST; 4 days ago
 Main PID: 6946 (mongod)
    Tasks: 19
   Memory: 76.3M
      CPU: 57min 16.367s
   CGroup: /system.slice/mongodb.service
           └─6946 /usr/bin/mongod --quiet --config /etc/mongod.conf

View the last 20 lines of the MongoDB log file.

tail -n 20 /var/log/mongodb/mongod.log

(or replace the path with your log location)

tail -n 20 /yourmongodb_logs/mongod.log

Viewing MongoDB files

ls -al
total 384
drwxr-xr-x 4 root root 4096 Sep 5 18:12 .
drwxr-xr-x 30 root root 4096 Aug 5 18:48 ..
-rw-r--r-- 1 root root 32768 Sep 5 18:12 collection-0--5805649544981213952.wt
-rw-r--r-- 1 root root 36864 Sep 5 18:12 collection-0-6117837641988028070.wt
-rw-r--r-- 1 root root 32768 Sep 5 18:12 collection-2-6117837641988028070.wt
drwxr-xr-x 2 root root 4096 Sep 5 18:12 diagnostic.data
-rw-r--r-- 1 root root 16384 Sep 5 18:47 index-1--5805649544981213952.wt
-rw-r--r-- 1 root root 36864 Sep 5 18:12 index-1-6117837641988028070.wt
-rw-r--r-- 1 root root 32768 Sep 5 18:12 index-3-6117837641988028070.wt
-rw-r--r-- 1 root root 32768 Sep 5 18:47 index-4-6117837641988028070.wt
drwxr-xr-x 2 root root 4096 Sep 5 18:03 journal
-rw-r--r-- 1 root root 32768 Sep 5 18:12 _mdb_catalog.wt
-rw-r--r-- 1 root root 0 Sep 5 18:12 mongod.lock
-rw-r--r-- 1 root root 36864 Sep 5 18:12 sizeStorer.wt
-rw-r--r-- 1 root root 95 Sep 5 18:18 storage.bson
-rw-r--r-- 1 root root 49 Sep 5 18:18 WiredTiger
-rw-r--r-- 1 root root 4096 Sep 5 18:12 WiredTigerLAS.wt
-rw-r--r-- 1 root root 21 Sep 5 18:18 WiredTiger.lock
-rw-r--r-- 1 root root 996 Sep 5 18:12 WiredTiger.turtle
-rw-r--r-- 1 root root 61440 Sep 5 18:12 WiredTiger.wt

MongoDB Users and Roles

More on securing MongoDB here and here.

Remove MongoDB

sudo apt-get remove --purge mongodb-org

Remove MongoDB files

cd /mongodb_data_folder/
rm -R *.*
rm -R WiredTiger
rm -R journal

Remove MongoDB

sudo rm /var/lib/mongodb/mongod.lock
sudo apt-get purge mongodb mongodb-clients mongodb-server mongodb-dev
sudo apt-get purge mongodb-10gen
sudo apt-get autoremove

I had these warnings with MongoDB 3.2

I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
I CONTROL  [initandlisten] **        We suggest setting it to 'never'

I ran this fix.

Installing MongoDB on Ubuntu 16.04 guide here.  More on SSL and MongoDB here.

todo: MongoDB Startup at reboot troubleshooting steps.

Good luck.


v1.6 Added MongoDB 3.6 install info and starting MongoDB at boot

v1.5 MongoDB troubleshooting

Filed Under: MongoDB, Security, Ubuntu Tagged With: mongodb, secure, ubuntu

How to be alerted after system boot on Ubuntu 16.04 with an email via Gmail

September 5, 2017 by Simon

This will allow you to send an email when Ubuntu boots. You will need to ensure the sendemail utility is set up and working (read my guide on how to send email via G Suite from Ubuntu in the cloud, and set up an Ubuntu server in the cloud here (guide here)).

Create a scripts folder

mkdir /scripts/

Create a file called /scripts/emailstartup.sh and add:

#!/bin/bash

echo "Dumping startup log";
journalctl -b0 --system _COMM=systemd --no-pager >/scripts/boot.log

#echo "Deleting old log file";
#rm -R /scripts/boot.zip

#echo "Zipping up Startup Log File.";
#zip -r -9 /scripts/boot.zip /scripts/boot.log

echo "Sending Email With Attachment";
# Ensure you have gmail or gsuite setup on your domain, guide here https://fearby.com/article/moving-a-cpanel-domain-with-email-to-a-self-managed-vps-and-gmail/
sendemail -f you@yourdomain.com -t you@yourdomain.com -u "Startup: $HOSTNAME server" -m "Attached are the startup logs for $HOSTNAME server" -s smtp.gmail.com:587 -o tls=yes -xu you@yourdomain.com -xp password -a /scripts/boot.log

Optional: uncomment the lines above to attach a zip file instead of the log file (don't forget to change the -a attachment in the sendemail line to the zip).
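If sendemail is not available, Python's stdlib can build the same message. A sketch under the assumption that the addresses, password and SMTP details are placeholders like those above; it constructs the MIME envelope without sending (uncomment the last lines to actually send via smtp.gmail.com:587):

```python
# Build (and optionally send) the startup email with the boot log attached,
# equivalent to the sendemail call above. Addresses/credentials are placeholders.
import smtplib
import socket
from email.message import EmailMessage

hostname = socket.gethostname()

msg = EmailMessage()
msg["From"] = "you@yourdomain.com"
msg["To"] = "you@yourdomain.com"
msg["Subject"] = f"Startup: {hostname} server"
msg.set_content(f"Attached are the startup logs for {hostname} server")

# Attach the boot log produced by the journalctl line above.
log_data = b"example boot log contents"  # read /scripts/boot.log on a real server
msg.add_attachment(log_data, maintype="text", subtype="plain",
                   filename="boot.log")

print(msg["Subject"])

# To send via Gmail with TLS (matching the sendemail options above):
# with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
#     smtp.starttls()
#     smtp.login("you@yourdomain.com", "password")
#     smtp.send_message(msg)
```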

Make the script file executable

sudo chmod +x /scripts/emailstartup.sh

Test the script

sudo /bin/bash /scripts/emailstartup.sh
Dumping startup log
Deleting old log file
Zipping up Startup Log File.
  adding: scripts/boot.log (deflated 91%)
Sending Email With Attachment
Sep 05 18:49:27 yourservernamehere sendemail[2606]: Email was sent successfully!

Add the following line (via crontab -e) to ensure the script is executed 5 minutes after startup.

@reboot sleep 300 && /bin/bash /scripts/emailstartup.sh >> /dev/null 2>&1

On reboot, you will be emailed the desired start-up information.



v1.0 initial post

Filed Under: Email, OS, Server Tagged With: email, startup, ubuntu

Moving a CPanel domain with email to a self managed VPS and Gmail

August 3, 2017 by Simon

Below is my guide for moving away from a NetRegistry CPanel domain to a self-managed server and G Suite email.

I have had www.fearby.com since 1999 on three CPanel hosts (superwerbhost in the US, Jumba in Australia, and Uber in Australia (NetRegistry have acquired Uber and performance at the time of writing is terrible)). I was never informed by Uber of the sale, but my admin portal was moved from one host to another and each time performance degraded. I tried to speed up WordPress by optimizing images and installing cache plugins, but nothing worked; pages were loading in around 24 seconds on https://www.webpagetest.org.

Buy a domain name from Namecheap here.


I had issues with a CPanel domain on the hosts (Uber/NetRegistry) as they were migrating domains, and the NetRegistry chat rep said I needed to phone Uber for support. No thanks, I'm going self-managed and saving a dollar.

I decided to take ownership of my slow domain, set up my own VM, direct web traffic to it and redirect email to Gmail (I have done this before). I have set up Digital Ocean VMs (Ubuntu and CentOS), Vultr VMs and AWS VMs.

I have also had enough of Resource Limit Reached messages with CPanel and I can’t wait to…

  • not have a slow WordPress.
  • setup my own server (not a slow hosts server).
  • spend $5 less (we currently pay $25 for a CPanel website with 20GB storage total)
  • get a faster website (pages currently take about 24 seconds to load).
  • larger email mailboxes (30GB each).
  • Generate my own “SSL Labs A+ rated” certificate for $10 a year instead of $150 a year for an “SSL Labs C rated” SSL certificate from my existing hosts.

Backup

I have about 10 email accounts on my CPanel domain (using 14GB) and 2x WordPress sites. I want to back up my emails (with an Outlook export and a Thunderbird profile backup) and back up my domain files a few times before I do anything. Once DNS is set in motion, no server waits.

The Plan

Once everything is backed up I intend to set up a $5 a month Vultr VM and redirect all mail to Google G Suite (I have redirected mail before).

I will set up a Vultr web server in Sydney (following my guide here), buy an SSL certificate from Namecheap and move my WordPress sites.

Rough Plan

  • Reduce email accounts from 10 to 3.
  • Back up emails (twice, with Thunderbird and Outlook).
  • Set up an Ubuntu VM on Vultr.
  • Sign up for the Google G Suite trial.
  • Transfer my domain to Namecheap.
  • Link the domain DNS to Vultr.
  • Point the domain MX records to Google email.
  • Transfer the website.
  • Set up emails on Google.
  • Restore WordPress.
  • Go live.
  • Downgrade to personal G Suite before the trial expires.
  • Close down the old server.

Signing up for Google G Suite

I visited https://gsuite.google.com/ and started creating an account.

Get 20% off your first year by signing up for Google G Suite using this link: https://goo.gl/6vpuMm

Screenshots of Google G Suite setup

I created a link between G Suite and an existing GMail account.

More screenshots of Google G suite setup

Now I can create the admin account.

Picture of G suite asking how i will log in

Tip: Don't use any emails that are linked as secondary emails with any Google services (this won't be allowed). It's a well-known issue that you cannot add users whose emails are linked to Google services (even as backup emails for Gmail; detach the email before adding it). Read more here.

Google G suite did not like my email provided

Final setup steps.

Final G suite setup screenshots.

Now I can add email accounts to G Suite.

G Suite said im all ready to add users

Adding email users to G Suite.

G Suite adding users

The next thing I had to do was upload a file to my domain to verify I own the domain (DNS verification is also an option).

I must say the setup and verify steps are quite easy to follow on G Suite.

Time to backup our existing CPanel site.

Screenshot of Cpanel users

Backup Step 1 (hopefully I won’t need this)

I decided to grab a complete copy of my CPanel domain with domains, databases and email accounts. This took 24 hours.

CPanel backup screenshot

Backup Step 2 (hopefully I won’t need this)

I downloaded all mail via IMAP in Outlook and Mozilla Thunderbird and exported it (Outlook export and Thunderbird profile backup). Google has IMAP instructions here.

DNS Changes at Namecheap

I obtained my domain EPP code from my CPanel hosts and transferred the domain name to Namecheap.

Namecheap was even nice enough to point my DNS at my existing host, so I did not have to rush the move before DNS propagation.

P.S. The Namecheap chat staff and the Namecheap mobile app are awesome.

NameCheap DNS

Having backed up everything, I logged into Namecheap, set my DNS to “Namecheap BasicDNS”, then went to “Advanced DNS” and set the appropriate DNS records for my domain. This assumes you have set up a VM with IPv4 and IPv6 (follow my guide here).

  • A Record @ IPV4_OF_MY_VULTR_SERVER
  • A Record www IPV4_OF_MY_VULTR_SERVER
  • A Record ftp IPV4_OF_MY_VULTR_SERVER
  • AAAA Record @ IPV6_OF_MY_VULTR_SERVER
  • AAAA Record www IPV6_OF_MY_VULTR_SERVER
  • AAAA Record ftp IPV6_OF_MY_VULTR_SERVER
  • CNAME Record www fearby.com

Google G Suite also asked me to add the following MX records to the DNS records.

  • MX Record @ ASPMX.L.GOOGLE.COM. 1
  • MX Record @ ASPMX1.L.GOOGLE.COM. 5
  • MX Record @ ASPMX2.L.GOOGLE.COM. 5
  • MX Record @ ASPMX3.L.GOOGLE.COM. 10
  • MX Record @ ASPMX4.L.GOOGLE.COM. 10

Then it was a matter of telling Google the DNS changes were made (once DNS had replicated).

My advice is to set DNS changes before bed as it can take 12 hours.

Sites like https://www.whatsmydns.net/ are great for keeping track of DNS replication.

Transferring WordPress

I logged into the CPanel and exported my WordPress Database (34MB SQL file).

I had to make the following php.ini changes to allow the larger file upload for the restore with the Adminer utility (the default is 2MB). I could not get the server-side adminer.sql.gz option to restore the database.

post_max_size = 50M
upload_max_filesize = 50M

# Change these back to 2M after you restore the files, to reduce the risk of DoS via large uploads.
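The two limits above can be bumped (and later reverted) with sed. A sketch only: the real file on Ubuntu with PHP 7.0 FPM is assumed to be /etc/php/7.0/fpm/php.ini, and the commands below work on a copy so nothing changes until you copy it back.

```shell
# Work on a copy of php.ini; fall back to a tiny stand-in so the
# commands can be tried anywhere without the real file present.
cp /etc/php/7.0/fpm/php.ini ./php.ini.tmp 2>/dev/null \
  || printf 'post_max_size = 2M\nupload_max_filesize = 2M\n' > ./php.ini.tmp

# Raise both limits to 50M for the one-off database restore
sed -i 's/^post_max_size = .*/post_max_size = 50M/' ./php.ini.tmp
sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 50M/' ./php.ini.tmp

# Confirm the edits before copying the file into place
grep -E '^(post_max_size|upload_max_filesize)' ./php.ini.tmp

# When happy: sudo cp ./php.ini.tmp /etc/php/7.0/fpm/php.ini
#             sudo service php7.0-fpm restart
```
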

I had to make the following change to nginx.conf (to prevent upload-size errors on the large database upload)

client_max_body_size 50M;
# client_max_body_size 2M; Reset when done

I also had to make these changes to NGINX (sites-available/default) to allow WordPress to work.

# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

location / {
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
        index index.php index.html index.htm;
        proxy_set_header Proxy "";
}

I had a working MySQL (I followed my guide here).

Adminer is the best PHP MySQL management utility (it beats phpMyAdmin hands down).

Restart NGINX and PHP

nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart

I had an error on database import: a nondescript error at line 1 of the script (error hint here).

A simple search and replace in the SQL fixed it.
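In my case a plain search and replace over the dump file was enough before re-importing. A sketch with sed; the collation token below is only an example of the kind of string to swap (use whatever token your error message names), and the commands work on a copy of the dump:

```shell
# Work on a copy of the export; create a tiny stand-in if backup.sql is absent
cp backup.sql backup-fixed.sql 2>/dev/null \
  || printf 'CREATE TABLE t (c TEXT) COLLATE utf8mb4_unicode_520_ci;\n' > backup-fixed.sql

# Example: downgrade a collation the target MySQL build does not recognise
sed -i 's/utf8mb4_unicode_520_ci/utf8mb4_unicode_ci/g' backup-fixed.sql
```

Then import backup-fixed.sql (not the original) with Adminer.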

Once I had increased the PHP upload limits to 50M and raised the NGINX body size, I was able to upload my database backup with Adminer (just remember to import into the database named in wp-config.php, and ensure your WordPress content files are in place too).

The only other problem was that WordPress gave an “Error 500”, so I moved a few plugins aside and all was good.

Importing Old Email

I was able to use the Google G Suite tools to import my old Mail (CPanel IMAP to Google IMAP).

Import IMAP mail to GMail

I love having root access on my own server now; goodbye CPanel “Usage Limit Exceeded” errors (I only had light traffic on my site).

My self-hosted WordPress is a lot snappier now and my server has plenty of space (and only costs $0.007 an hour for 1x CPU, 1GB RAM, 25GB SSD storage and a 1,000GB data transfer quota). I use the htop command to view processor and memory usage.

I now have more space for content and am not restricted by tight host disk quotas or slow shared servers. I use the pydf command to view disk space.

pydf
Filesystem  Size  Used   Avail  Use%                                               Mounted on
/dev/vda1   25G   3289M  20G    13.1 [######..........................................] /

I use ncdu to view folder usage.

Installing ncdu

sudo apt-get install ncdu
Reading package lists... Done
Building dependency tree
Reading state information... Done
ncdu is already the newest version (1.11-1build1).
0 upgraded, 0 newly installed, 0 to remove and 58 not upgraded.

Type ncdu in the folder you want to browse under.

ncdu

You can arrow up and down folder structures and view folder/file usage.

SSL Certificate

I am setting up a new multi-year SSL cert now and will update this guide later. I re-read my SSL guide with Digital Ocean here.

I generated a certificate signing request (CSR) on my server.

cd ~/
mkdir sslcsrmaster4096
cd sslcsrmaster4096/
openssl req -new -newkey rsa:4096 -nodes -keyout domain.key -out domain.csr

Sample output for a new certificate request

openssl req -new -newkey rsa:4096 -nodes -keyout dummy.key -out dummy.csr
Generating a 4096 bit RSA private key
.................................................................................................++
......++
writing new private key to 'dummy.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: AU
State or Province Name (full name) [Some-State]: NSW
Locality Name (eg, city) []:Tamworth
Organization Name (eg, company) [Internet Widgits Pty Ltd]: Dummy Org
Organizational Unit Name (eg, section) []: Dummy Org Dept
Common Name (e.g. server FQDN or YOUR name) []: DummyOrg
Email Address []: [email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: password
An optional company name []: DummyCO
[email protected]:~/sslcsrmaster4096# cat dummy.csr
-----BEGIN CERTIFICATE REQUEST-----
MIIFAjCCAuoCAQAwgYsxCzAJBgNVBAYTAkFVMQwwCgYDVQQIDANOU1cxETAPBgNV
BAcMCFRhbXdvcnRoMRIwEAYDVQQKDAlEdW1teSBPcmcxFzAVBgNVBAsMDkR1bW15
IE9yZyBEZXB0MREwDwYDVQQDDAhEdW1teU9yZzEbMBkGCSqGSIb3DQEJARYMbWVA
ZHVtbXkub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA6PUtWkRl
+gL0Hx354YuJ5Sul2Xh+ljILSlFoHAxktKlE+OJDJAtUtVQpo3/F2rGTJWmmtef+
shortenedoutput
swrUzpBv8hjGziPoVdd8qdAA2Gh/Y5LsehQgyXV1zGgjsi2GN4A=
-----END CERTIFICATE REQUEST-----
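Before pasting a CSR anywhere it is worth decoding it to confirm the details are right. The -subj flag below also skips the interactive prompts (all values are dummies, and 2048 bits is used here just so it generates quickly; use 4096 for a real request):

```shell
# Generate a throwaway key and CSR non-interactively
openssl req -new -newkey rsa:2048 -nodes \
  -keyout check.key -out check.csr \
  -subj "/C=AU/ST=NSW/L=Tamworth/O=Dummy Org/CN=dummy.org"

# Decode the CSR and confirm the subject before uploading it
openssl req -in check.csr -noout -subject -verify
```
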

I then uploaded the certificate to Namecheap for an SSL cert registration.

I selected a DNS CNAME record as the way to verify I own my domain.

I am now waiting for Namecheap to verify my domain.

End of the Google G Suite Business Trial

Before the end of the 14-day trial, you will need to add billing details to keep the email working.

At this stage, you can downgrade from a $10/month per user business account to a $5/month per user account if you wish. The only loss is storage and some Google app access.

Get 20% off your first year by signing up for Google G Suite using this link: https://goo.gl/6vpuMm

Before your trial ends, add your payment details and downgrade from $10/user a month business pricing to $5/user a month individual if needed.

G Suite Troubleshooting

I was able to access my new G Suite email account via gmail.com but not via Outlook 2015. I reset the password, followed the Google troubleshooting guide and used the official incoming and outgoing settings, but nothing worked.

troubleshooting 1

Google phone support suggested I enable the less secure apps setting, as the Google firewall may be blocking Outlook. I know the IMAP RFC is many years old, but I doubt Microsoft talks to G Suite in a lazy manner.

Now I can view my messages, and I can see one email saying I was blocked by the firewall. Google phone support and FAQs don’t say why Outlook 2015’s SSL-based IMAP was blocked.

past email

Conclusion

Thanks to my wife who put up with my continual updates over the entire domain move. Voicing the progress helped me a lot.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V1.8 added ad link

Filed Under: Advice, DNS, MySQL, OS, Server, Ubuntu, VM, Vultr, Website, WordPress Tagged With: C Name, DNS, gmail, mx, server, ubuntu, vm, VPS, Vultr

Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up CentOS and Ubuntu servers on Digital Ocean before. Digital Ocean does not have data centres in Australia, which hurts latency for local users. AWS is good but about 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a Credit Card.

Vultr add cc

4) Verify your email address and check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 server: 2 CPU, 4GB RAM, 60GB SSD and 3,000GB transfer in Sydney for $20 a month). I enabled IPV6 and Private Networking and chose Sydney as the server location. Digital Ocean would only have offered 2GB RAM and 40GB SSD at this price. AWS would have charged around $80.

Vultr deploy vm

2 cores and 4GB RAM is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache, Redis etc).

Vultr 20 month

6) I followed this guide, generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput
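Because the key has a non-default name, an entry in ~/.ssh/config saves passing -i every time. A sketch (the host alias and IP are placeholders):

```
# ~/.ssh/config
Host vultr
    HostName xx.xx.xx.xx
    User root
    IdentityFile ~/.ssh/vultr_rsa
```

After that, `ssh vultr` picks the right key automatically.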

Vultr add ssh key

7) I was a bit confused by the UI when adding the SSH key on the deploy screen (the SSH key was added but not highlighted, so I recreated the server to deploy and the SSH key now appears).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the same @ and www A Name records; fyi “@” was replaced with “”).

Vultr dns

I waited 60 minutes and DNS propagation happened. I used the site https://www.whatsmydns.net to see how far the DNS replication had spread (early on it was still returning errors).

Setting the Server’s Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind).

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney Australia.

dpkg-reconfigure tzdata

I installed the NTP time-syncing service.

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂

Installing the NGINX Web Server (Ubuntu)

More on the differences between the Apache and NGINX web servers here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Server
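With the root account secured, WordPress should get its own database and a non-root user rather than using root in wp-config.php. The statements below are the general shape I would feed to `sudo mysql` (the database name, user and password are all placeholders); the heredoc writes them to a file so they can be reviewed before being applied:

```shell
# Write the provisioning SQL to a file for review (all names are placeholders)
cat <<'SQL' > provision.sql
CREATE DATABASE wordpress_db CHARACTER SET utf8mb4;
CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON wordpress_db.* TO 'wp_user'@'localhost';
FLUSH PRIVILEGES;
SQL
cat provision.sql

# Then apply it with: sudo mysql < provision.sql
```
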

Install PHP 7.x and PHP7.0-FPM  (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into PHP 7 (thanks to this guide).

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

Initial NGINX Configuring – Pre SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and another from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it is better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;


        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap chat to resolve a privacy error (I stuffed up and was getting the errors “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo();
?>

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history

# add these lines to ~/.bashrc
HISTSIZE=10000
HISTCONTROL=ignoredups

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was set up by clicking my server, then Settings, then Firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx

Assign a Domain (Vultr)

I assigned a domain to my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IPs

You should also set up a static IP in /etc/network/interfaces, as mentioned in the settings for your server at https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Back up your existing Ubuntu 16.04 DHCP network configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure the static IP when the server is refusing a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with automatic DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing ens3 to your network adapter) from the web console.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIDR: 64, Recursive DNS: None)
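For reference, a static /etc/network/interfaces on Ubuntu 16.04 generally has the shape sketched below. Every address here is a placeholder; use the exact values from Vultr's network configuration examples page (or the support ticket reply) for your own SUBID:

```
# /etc/network/interfaces (Ubuntu 16.04 sketch; all addresses are placeholders)
auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
    address xx.xx.xx.xx
    netmask 255.255.254.0
    gateway xx.xx.xx.1

iface ens3 inet6 static
    address xx:xx:xx:xx::1
    netmask 64
```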

Install an FTP Server (Ubuntu)

I decided on pure-ftpd based on this advice. I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to log in via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connecting to my Vultr domain with C9.io

I logged in to C9.io, created a new remote SSH connection to my Vultr server, copied the SSH key and added it to my Vultr authorized_keys file.

sudo nano authorized_keys

I opened the site with C9 and it setup my environment.

I do love C9.io

Vultr c9

Add an SSL certificate (reissue existing SSL cert at Namecheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate, so I reissued it. I followed the Namecheap reissue guide here.

I recreated the certificate request:

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr

I pasted the CSR into the Namecheap Reissue Certificate form.

Vultr ssl cert

Tip: Make sure the reissued certificate uses the same common name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the Namecheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX allowed index files list.

I added the file to /etc/nginx/sites-available/default:

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked via Namecheap chat (Alexandra Belyaninova) to speed up the HTTP domain verification, and they said the text file needs to be placed in the /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53guidremovedE5.txt and http://www.mydomain.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the SSL cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what was next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime (rather than downloading a pre-generated one from here).

openssl dhparam -out dhparams4096.pem 4096

This Namecheap guide tells you how to activate a new certificate and how to generate a CSR file. Note: that guide generates a 2048-bit key, which will cap your SSL certificate's security at a B on https://www.ssllabs.com/ssltest/ so I recommend you generate a 4096-bit CSR key and a 4096-bit Diffie-Hellman key.

I used https://certificatechain.io/ to generate a valid certificate chain.
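Before pointing NGINX at the reissued files, it is worth confirming the certificate and private key actually belong together: the RSA modulus of each must hash identically. A sketch (a throwaway self-signed pair is generated here just to demonstrate; with real files, point the last two commands at /etc/nginx/ssl/):

```shell
# Throwaway pair for demonstration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo.key -out demo.crt -subj "/CN=demo" 2>/dev/null

# These two MD5 sums must match, or NGINX will reject the cert/key pair
openssl x509 -noout -modulus -in demo.crt | openssl md5
openssl rsa  -noout -modulus -in demo.key | openssl md5
```
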

My SSL /etc/nginx/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on;  # not needed here; it caused too many HTTP redirects
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and another from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it is better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;

        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # header buffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap Chat (Dmitriy) also recommended I clear my google cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io.

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date https://www.openssl.org/

I will check often.
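You can also check the OpenSSL version that PHP itself was built against, which can differ from the system's openssl binary (a quick sketch; requires the openssl extension, which most builds include):

```php
<?php
// The openssl CLI and the OpenSSL PHP was compiled against can differ,
// so it is worth checking both after an upgrade.
$v = defined('OPENSSL_VERSION_TEXT')
    ? OPENSSL_VERSION_TEXT
    : 'openssl extension not loaded';
echo $v . PHP_EOL;   // e.g. "OpenSSL 1.1.0f  25 May 2017"
```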

Install MySQL GUI

Installed the Adminer MySQL GUI tool (uploaded)

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP’s upload_max_filesize temporarily to allow me to restore a database backup.  I edited /etc/php/7.0/fpm/php.ini and then reloaded PHP:

sudo service php7.0-fpm restart
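A quick way to confirm the new limit took effect after the restart (run this via PHP-FPM, e.g. in a throwaway test page, since the CLI uses a different php.ini):

```php
<?php
// Confirm the active upload limits after editing php.ini.
// Note: post_max_size must be at least upload_max_filesize,
// or uploads are still capped at the smaller value.
$upload = ini_get('upload_max_filesize');
$post   = ini_get('post_max_size');
echo "upload_max_filesize: $upload" . PHP_EOL;
echo "post_max_size:       $post" . PHP_EOL;
```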

I used Adminer to restore a database.

Support

I found the email support at Vultr was great, I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go.  I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.


Definitely give Vultr a go (they even have data centers in Sydney). Sign up with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.


Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPv4 and IPv6) in Vultr. At first, I only allowed IPv4 addresses, but it looks as if Vultr uses IPv6 internally, so add your IPv6 address too (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API contains URLs (and tokens) to your virtual console and root passwords, so ensure your API key is secured.

Here is some working PHP code to query the API:

<?php

$ch = curl_init();
$headers = [
     'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // lazy; enable peer verification in production
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

print $server_output;

// decode to an associative array if you want to work with the fields
print_r(json_decode($server_output, true));
?>

Your server will need curl installed. If you also want to open URLs with PHP’s file functions (rather than curl), you will need to enable URL opening in your php.ini file.

allow_url_fopen = On
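As an aside, allow_url_fopen governs PHP’s URL-aware file functions, not the curl extension. If you would rather skip curl, here is a sketch of the same request using file_get_contents() with a stream context (the API key is a placeholder, and the network call is left commented out):

```php
<?php
// Alternative to curl: file_get_contents() with a stream context.
// This path is what actually requires allow_url_fopen = On.
// 'removed' is a placeholder for your real API key.
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'header'  => "API-Key: removed\r\n",
        'timeout' => 10,
    ],
]);
// $json    = file_get_contents('https://api.vultr.com/v1/server/list', false, $context);
// $servers = json_decode($json, true);
```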

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // lazy; enable peer verification in production
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// Replace 123456 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used GB of $vultr_bandwidth_allowed GB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

//Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";
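If you manage more than one VM, a sketch like this avoids hard-coding the server ID by looping over every entry in the decoded list. summariseServers() is my own hypothetical helper, and the stub data mimics the field names in the raw packet later in this post:

```php
<?php
// Loop over every server in the decoded /v1/server/list response
// instead of hard-coding one SUBID.
function summariseServers(array $servers) {
    $lines = [];
    foreach ($servers as $subid => $server) {
        $lines[] = sprintf('%s: %s, %s vCPU, %s RAM, power %s',
            $subid,
            $server['location'],
            $server['vcpu_count'],
            $server['ram'],
            $server['power_status']);
    }
    return $lines;
}

// Example with a stub entry in the API's shape:
$stub = ['123456' => ['location' => 'Sydney', 'vcpu_count' => '2',
                      'ram' => '4096 MB', 'power_status' => 'running']];
print_r(summariseServers($stub));
```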

A raw response from https://api.vultr.com/v1/server/list looks like this:

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#","power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.
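If you don’t have an API client handy, PHP itself can pretty-print a packet for reading in a terminal (the sample JSON here is a trimmed-down stand-in for the real response):

```php
<?php
// Re-encode the decoded response with JSON_PRETTY_PRINT to make the
// one-line packet readable.
$raw     = '{"123456":{"os":"Ubuntu 16.04 x64","location":"Sydney"}}'; // trimmed sample
$decoded = json_decode($raw, true);
$pretty  = json_encode($decoded, JSON_PRETTY_PRINT);
echo $pretty . PHP_EOL;
```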

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. Found the number formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to prefered response
    }
}
?>

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your server’s ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

//Get 123456 Incoming Bytes Day Before Yesterday
$vultr123456_incoming00ib = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_incoming00ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Yesterday
$vultr123456_incoming01ib = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr123456_incoming01ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Today
$vultr123456_incoming02ib = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr123456_incoming02ib, true) . "</strong><br/><br/>";

//Get 123456 Outgoing Bytes Day Before Yesterday
$vultr123456_outgoing00ob = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_outgoing00ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Yesterday
$vultr123456_outgoing01ob = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_outgoing01ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Today
$vultr123456_outgoing02ob = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr123456_outgoing02ob, true) . "</strong><br/>";

echo "<br />";
?>
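If you would rather not hand-index each day, the per-day lookups can be collapsed into a loop. This is just a sketch: formatBytes() is a simple stand-in for the swissConverter() function above, bandwidthSummary() is my own hypothetical helper, and the stub data mimics the shape of the bandwidth response (entries ordered oldest-first as [date, bytes]):

```php
<?php
// Simple stand-in for the swissConverter() helper above.
function formatBytes($bytes, $precision = 2) {
    $suffixes = ['B', 'KB', 'MB', 'GB', 'TB'];
    $base = $bytes > 0 ? floor(log($bytes, 1024)) : 0;
    return round($bytes / pow(1024, $base), $precision) . ' ' . $suffixes[$base];
}

// Summarise the decoded bandwidth arrays in one pass.
function bandwidthSummary(array $days, array $labels) {
    $out = [];
    foreach ($days as $i => $day) {             // $day = [date, bytes]
        $label = $labels[$i] ?? "Day $i";
        $out[] = "$label: " . formatBytes($day[1]);
    }
    return $out;
}

// Example with stub data in the API's shape:
$incoming = [['2017-07-28', 1048576], ['2017-07-29', 524288], ['2017-07-30', 2097152]];
print_r(bandwidthSummary($incoming, ['Day before yesterday', 'Yesterday', 'Today']));
```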

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the ping function globally

<?php
function pingfsockopen($host, $port = 443, $timeout = 3)
{
        // @ suppresses the warning fsockopen() raises when the connection fails
        $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
        if (!$fsock)
        {
                return FALSE;
        }
        fclose($fsock);
        return TRUE;
}
?>

Now you can grab the server’s IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}
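As a bonus, here is a hypothetical variant of the ping function that also reports the TCP connect time in milliseconds (a rough stand-in for latency):

```php
<?php
// Variant of pingfsockopen() that returns the connect time in ms,
// or false if the host is unreachable within $timeout seconds.
function pingWithLatency($host, $port = 443, $timeout = 3)
{
    $start = microtime(true);
    $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
    if (!$fsock) {
        return false;
    }
    fclose($fsock);
    return round((microtime(true) - $start) * 1000, 1); // milliseconds
}
```

Usage would mirror the block above, e.g. `$ms = pingWithLatency($vultr_mainip);` then test for `false` before printing the number.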

Setup Google DNS

sudo nano /etc/network/interfaces

Add this line:

dns-nameservers 8.8.8.8 8.8.4.4

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.


v1.993 added log info

