



Setting up a Vultr VM and configuring it with Runcloud.io

July 27, 2017 by Simon

I have set up a Vultr VM manually (guide here). I decided to set up a VM software stack with RunCloud to see if it is as secure and as fast as a manual setup (and saves me time).

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

I deployed an Ubuntu server in NY for $2.50 a month on Vultr (I chose NY over Sydney as the Sydney $2.50 servers were sold out).

I then signed up for Runcloud.

I opened up ports 80, 443 and 34210 as recommended by RunCloud.

I then connected to my Vultr server with Runcloud.

Then RunCloud asked that I run a script on the server as root.

Tip: Don’t run the script on a different IP like I did.

It appears I accidentally ran the RunCloud install command on the wrong IP. What did it install/change? I looked to see if RunCloud offers an uninstall command: nope.

Snip from RunCloud:

Deleting Your Server

Deleting your server from RunCloud is permanent. You can’t reconnect it back after you have deleted your server. The only way to reconnect is to format your server and run the RunCloud’s installation again.

To completely uninstall RunCloud, you must reformat your server. We don’t have any uninstallation script (yet).

No uninstall?

Time to check out the RunCloud IDE at https://manage.runcloud.io to see what it looks like.

View Server Configuration

I was able to start an NGINX installation/web server within a few clicks (it installed to /etc/nginx-rc).

RunCloud.io Pros

  • Setup was quick.
  • The dashboard looks pretty.

RunCloud.io Cons

  • My root access is no longer working (what happened?). I did notice that Fail2Ban was blocking loads of IPs.

  • I can’t seem to edit website files with RunCloud.io.
  • Limited by the UI (I could create a database and a database user but not set database user permissions or assign database users to databases (there is a guide but no GUI for adding a user to a DB)).
  • I have to trust that RunCloud have set things up securely.
  • The CPanel UI has more options than RunCloud IMHO.
  • Other free server management/monitoring tools exist, like https://www.phpservermonitor.org.

  • RunCloud.io RTFM? Seriously (you know what the F stands for; are customers really that bad)? https://runcloud.io/rtfm/

Domain

I linked a domain to the IP, now I just need to wait before continuing.

End of Guide

I have been locked out of my RunCloud-managed domain, so I will stick to manually set up servers.

Being locked out is good for security I guess 🙁


V1.4 added tags and fixed typos.

v0.31 Domain linked


How to develop software ideas

July 9, 2017 by Simon

I was recently at a public talk by Alan Jones at the UNE Smart Region Incubator where Alan talked about launching startups and developing ideas.

Alan put it quite eloquently that “With change comes opportunity”, and we are all very capable of building the next best thing, as technological barriers and costs are a lot lower than 5 years ago. Alan also mentioned that most start-ups fail, but “if you focus on solving customer problems you have a better chance of succeeding”. Regions need to share knowledge and you can learn from other people’s mistakes.

I was asked after this event to share thoughts on “how do I learn to develop an app” and “how do you get the knowledge”. Here is my rough “brain dump” on how to develop software ideas (it’s hard to condense 30 years’ experience developing software). I will revise this post over the coming weeks so check back often.

If you have never programmed before, check out these programming 101 guides here.

I have blogged on technology/knowledge things in the past at www.fearby.com and recently I blogged about how to develop cloud-based services (here, here, here, here and here) but this blog post assumes you have a validated “app idea” and you want to know how to develop yourself. If you do not want to develop an app yourself you may want to speak with Blue Chilli.

Find a good mentor.


True App Development Quotes

  • Finding development information is easy, following a plan is hard.
  • Aim for progress and not perfection.
  • Learn one thing at a time (Multitasking can kill your brain).
  • Fail fast and fail early and get feedback as early as possible from customers.
  • 10 engaged customers are better than 10,000 disengaged users.

And a bit of humour before we start.

Project Management Lol


Here is a funny video on startup/entrepreneur life/lingo


This is a good, funny, open and honest video about programming on YouTube.

Follow Seth F Samuel on twitter here.

Don’t be afraid to learn from others before you develop

My fav tips from over 200 failed startups (from https://www.cbinsights.com/blog/startup-failure-post-mortem/ )

  • Simpler websites shouldn’t take more than 2-3 months. You can always iterate and extrapolate later. Wet your feet asap.
  • As products become more and more complex, performance degrades. Speed is a feature for all web apps. You can spend hundreds of hours trying to speed up the app with little success. Incorporating benchmarking tools into the development cycle from the beginning is a good idea.
  • Outsource or buy in talent if you don’t know something (e.g marketing). Time is money.
  • Make an environment where you will be productive. Working from home can be convenient, but often times will be much less productive than a separate space. Also it’s a good idea to have separate spaces so you’ll have some work/life balance.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • We most definitely committed the all-too-common sin of premature scaling. Driven by the desire to hit significant numbers to prove the road for future fundraising and encouraged by our great initial traction in the student market, we embarked on significant work developing paid marketing channels and distribution channels that we could use to demonstrate scalable customer acquisition. This all fell flat due to our lack of product/market fit in the new markets, distracted significantly from product work to fix the fit (double fail) and cost a whole bunch of our runway.
  • If you’re bootstrapping, cash flow is king. If you want to possibly build a product while your revenue is coming from other sources, you have to get those sources stable before you can focus on the product.
  • Don’t multiply big numbers. Multiply $30 times 1,000 clients times 24 months. WOW, we will be rich! Oh, silly you, you have no idea how hard it is to get 1,000 clients paying anything monthly for 24 months. Here is my advice: get your first client. Then get your first 10. Then get more and more. Until you have your first 10 clients, you have proved nothing, only that you can multiply numbers.
  • Customers pay for information, not raw data. Customers are willing to pay a lot more for information and most are not interested in data. Your service should make your customers look intelligent in front of their stakeholders. Follow up with inactive users. This is especially true when your service does not give intermediate values to your users. Our system should have been smarter about checking up on our users at various stages.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.

Here are my tips on staying on track developing apps. What is the difference between a website, app, API, web app, hybrid app and software (my blog post here)?

I have seen quite a few projects fail because:

  • The wrong technology was mandated.
  • The software was not documented (by the developers).
  • The software was shelved because new developers hated it or did not want to support it.

Project Roles (hats)

It is important to understand the roles in a project (project management methodology aside) and know when you are being a “decision maker” or a “technical developer”. A project usually has these roles.

  • Sponsor/owner (usually fund the project and have the final say).
  • Executive/Team leader/scrum master (manage day to day operations, people, tasks and resources).
  • Team members (UI, UX, Marketers, Developers (DevOps, Web, Design etc)) are usually the doers.
  • Stakeholders (people who are impacted (operations, owners, Helpdesk)).
  • Subject Matter Experts (people who should guide the work and not be ignored).
  • Testers (people who test the product and give feedback).

It can be hard as a developer to switch hats in a one-person team.

How do you develop and gain knowledge?

First, document what you need to develop (what problem are you solving and what value will your idea bring). Does this solution exist already? Don’t solve a problem that has already been solved.

Developing software is not hard, you just need to be logical, research, be patient and follow a plan. The hardest part can be gluing components together.

I like to think of developing software like making a car: if you need 4 wheels, do you have 4 wheels? If you want to build it yourself and save some money, can you make wheels (make rubber strips with steel-reinforced/vulcanized rubber, make alloys, add bearings and have them pass regulations) or should you buy wheels (some things are cheaper to make than others)? Developing software can be easy if you know what you are doing, have the experience and are aware of the costs and risks. Developing software can lead you down a rabbit hole of endless research, development and testing if you don’t know what you are doing.

Example 1:

I “need a webpage”:

  • Research: Will Wix, Shopify or a hosted WordPress website do (is it flexible or cheap enough), or do I install WordPress (guide here), or do I learn and build an HTML website and buy a theme and modify it (and have a custom/flexible solution)?

Example 2:

I “need an iPhone and Android app”:

Research: You will need to learn iOS and Android programming and you may need a server or two to hold the app’s data, webpage and API. You will also need to set up and secure the servers and choose whether to install a database yourself or go with a “database as a service” like cloud.mongodb.com or Google Firebase.

Money can buy anything (but will it be flexible/cheap enough), time can build anything (but will it be secure enough).


Almost all systems will need a central database to store all data; you can choose a traditional relational SQL database or a newer NoSQL database. MySQL is a good/cheap relational SQL database and MongoDB is a good NoSQL database. You will need to decide how your app talks to the database (directly, or via an API protected by OAuth or limited access tokens). It is a bad idea to open a database directly to the world with no security. Sites like www.shodan.io automatically scan the Internet looking for open databases and systems and report insecure sites to anyone. It is in your interest to develop secure systems in all stages of development.
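
To make that concrete, below is a minimal sketch of an API sitting between an app and MySQL (assuming Node.js with the express and mysql2 packages, a hypothetical products table and a placeholder token; adapt names and credentials to your setup):

// Minimal token-protected API endpoint in front of MySQL (sketch only).
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({
  host: 'localhost', user: 'apiuser', password: 'secret',
  database: 'shop', connectionLimit: 10,
});

// Illustrative only: in production, store tokens in a database or use OAuth.
const validTokens = new Set(['example-limited-access-token']);

app.get('/api/products/:id', async (req, res) => {
  try {
    const token = (req.headers.authorization || '').replace('Bearer ', '');
    if (!validTokens.has(token)) return res.status(401).json({ error: 'Unauthorised' });
    // Parameterised query: user input is never concatenated into the SQL string.
    const [rows] = await pool.query('SELECT id, name, price FROM products WHERE id = ?', [req.params.id]);
    res.json(rows);
  } catch (err) {
    res.status(500).json({ error: 'Server error' });
  }
});

app.listen(3000, () => console.log('API listening on port 3000'));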

CRUD (Create, Read, Update and Delete) is a common group of database tasks that you can use to prove you can read, write, update and delete from a database. Performing CRUD operations is also a good benchmark to see how fast the database is. If the database is the slowest link, you can use memory to cache database values (read my guide here). Caching can turn a cheap server into a faster server. Learning by doing quickly builds skills, so “research”, “do” and “learn”.
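
As a rough illustration of caching, here is a sketch using a plain in-memory Map with a time-to-live (in production you would likely reach for Redis or memcached, but the idea is the same):

// Serve repeated reads from memory so the database is only hit when the cache is stale.
const cache = new Map();

async function cachedRead(key, fetchFromDb, ttlMs = 30000) {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.time < ttlMs) return hit.value; // fast path: memory
  const value = await fetchFromDb(); // slow path: the database
  cache.set(key, { value, time: Date.now() });
  return value;
}

// Usage (assuming a mysql2 pool as in the earlier sketch):
// const user = await cachedRead('user:42', () => pool.query('SELECT * FROM users WHERE id = ?', [42]));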

Most solutions will need a website (and a web server). Here is a good article comparing Apache and Nginx (the leading open source web servers).

Stacks and Technology – There are loads of development environments (stacks), frameworks and technologies that you can choose. Frameworks supposedly make things easier and faster, but frameworks and technologies change frequently and can be abandoned (see the 2016 frameworks to learn guide and 2017 frameworks to learn guide). Also be careful: most frameworks run around 30% slower than raw server-side and client code. I’d recommend you learn a few technologies like NGINX, NodeJS, PHP and MySQL and move up from there.

The MEAN stack is a popular web development platform (MEAN = MongoDB, ExpressJS, Angular and NodeJS).

Apps can be developed for Apple platforms by signing up here (about $150 AUD a year) and using the Xcode IDE. Apps can be developed for the Android platform by using Android Studio (for about $20 (one-off fee)). Microsoft has a developer portal for the Windows platform. Google also has an online scalable database as a service called Firebase. If you look hard enough you will find a service for everything, but connecting those services can be time-consuming, costly, or make a secure and scalable solution impossible, so beware of relying on as-a-service platforms. I used the Corona SDK to develop an app but abandoned the platform due to changes in the vendor’s communication and enforced policies.

If you are not sure, don’t be afraid to ask for help on Twitter.

Twitter is awesome for finding experts

Recent twitter replies to a problem I had.

Learning about new Technology and Stacks

To build the knowledge you need, learn stuff, build stuff, test (benchmark), get feedback and build more stuff. I like to learn about new technology and stacks by watching Udemy courses, and they have a huge list of development courses (Web Development, Mobile Apps, Programming Languages, Game Development, Databases, Software Testing, Software Engineering etc).

I am currently watching a Practical iOS 11 course by Stephen DeStefano on Udemy to learn about unreleased/upcoming features on the Apple iPhone (learning about XCode 9, Swift 4, What’s new in iOS 11, Drag and drop, PDF and ARKit etc).

Udemy is awesome (Udemy often have courses for $15).

If you want to learn HTML go to https://www.w3schools.com/.

https://devslopes.com/ have a number of development-related courses and an active community of developers in a chat system.

You can also do formal study via an education provider (e.g. Bachelor of computer sciences at UNE or Certificate IV in programming or Diploma in Software Development at TAFE).

I would recommend you use Twitter and follow keywords (hashtags) around key topics (e.g #www, #css, #sql, #nosql, #nginx, #mongodb, #ios, #apple, #android, #swift, #objectivec, #java, #kotlin) and identify users to follow. Twitter is great for picking up new information.

I follow the following developers on YouTube (TheSwiftGuy, AppleProgrammer, AwesomeTuts, LetsBuildThatApp, CodingTech etc).

Companies like https://www.civo.com/ offer developer-friendly features with hosting, https://www.pebbled.io/ offer to develop for you and https://serverpilot.io/ help you spin up software on hosting providers.

What To Develop

First, you need to break down what you need (e.g. “I want an app for iOS and Android in 5 months that does XYZ. The app must be secure and fast. Users must be able to register an account and update their profile”).

How high your development project needs to scale depends on your peak expected active concurrent users (and the ratio of paying to free users). You can develop your app to scale very high, but this may cost more money initially, and it can be wasteful to pay to ensure scalability early. As long as you have a good product, robust networking/retry routines and a solid UI, you don’t need to scale high early.
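
A robust retry routine does not have to be complicated. Here is a hedged sketch in plain Node.js (the fetch call in the usage line assumes Node 18+, or substitute any promise-returning network call):

// Retry a flaky network call with exponential backoff:
// waits 500ms, 1s, 2s, 4s between attempts before giving up.
async function withRetry(fn, maxRetries = 4) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxRetries) throw err; // out of retries: surface the error
      await new Promise((resolve) => setTimeout(resolve, 500 * 2 ** attempt));
    }
  }
}

// Usage: const data = await withRetry(() => fetch('https://api.example.com/profile'));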

Once you know what you need you can search the open-source community for code that you can use. I use Alamofire for iOS network requests, SwiftyJSON for processing JSON data and other open-source software. The only downside of using open source software is it may be abandoned by the creators and break in the future. Saving your time early may cost you time later.

Then you can break down what you don’t want (e.g. “I don’t want a web app, a Windows Phone app or a Windows desktop app”). From here you will have a list of what you need and what you can avoid.

You will also need to choose a project management methodology (I have blogged about this here). With a list of action items and a plan, you can work through developing your app.

While you are researching, it is a good idea to develop smaller fun projects to refine your skills. There are a number of System Development Life Cycles (SDLCs), but don’t worry if you get stuck; seek advice or move on. It is a good idea to get users beta testing your app early and seek feedback. Apple has the TestFlight app where you can send beta versions of apps to beta testers. Here is a good guide on Android beta testing.

If you are unsure about certain user interface options or features, divide your beta testers and perform A/B or split testing to determine the most popular user interfaces. Capturing user data and logs can also help with debugging and understanding how users actually behave.
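
The key implementation detail for A/B testing is assigning each user to a variant deterministically, so the same user always sees the same interface across sessions. A minimal sketch (the hashing scheme here is just one reasonable choice):

// Deterministically map a user id to variant 'A' or 'B'.
const crypto = require('crypto');

function abVariant(userId) {
  const firstByte = crypto.createHash('md5').update(String(userId)).digest()[0];
  return firstByte % 2 === 0 ? 'A' : 'B';
}

console.log(abVariant('user-1001')); // same user id always yields the same variant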

Practice

Develop smaller proof of concept apps in new technologies or frameworks and you will build your knowledge and uncover limitations in certain frameworks and how to move forward with confidence. It is advisable to save your source code for later use and to share with others.

I have shared quite a bit of code at https://simon.fearby.com/blog/ that I refer to from time to time. I should have shared this on GitHub but I know Google will find this if people want it.

Get as much feedback as you can on what you do and choose (don’t trust the first blog post you read (me included)).

Most companies offer webinars on their products; I like the NGINX webinars. Tutorialspoint has courses on development topics. Sitepoint is a good development site that offers free books, courses and articles. Programmable Web has good information on what APIs are.

You may want to document your application flow to better understand how the user interface works.

Useful Tools

Balsamiq Mockups and Blueprint are handy for mocking up applications.

C9.io is a great web-based IDE that can connect to a VM on AWS or Digital Ocean. I have a guide on connecting Cloud 9 to an AWS VM here.

I use the Sublime Text 3 text editor when editing websites locally.

(image courtesy of https://www.sublimetext.com/ )

I use the Paw app on my Mac to help test APIs I develop locally.

(image courtesy of https://paw.cloud )

Snippets is a great application for the Mac for storing code snippets.

I use the Cornerstone Subversion app for backing up my code on my Mac.

Web servers: IIS (https://www.iis.net/), NGINX and Apache.

NodeJS programming manual and tutorials.

I use Little Snitch (guide here) to simulate network outages during app development.

I use the Forklift file manager on OSX.

Databases: SQL tutorials, NoSQL Tutorials, MySQL documentation.

Siege is a command-line HTTP load testing tool.


http://loader.io/ is a nice web-based benchmarking tool.

Bootstrap is an essential mobile responsive framework.

Atlassian Jira is an essential project tracking tool. More on Agile Epics v Stories v Tasks on the Atlassian community website here. I have a post on developing software and staying on track here using Jira.

Jsfiddle is a good site that allows you to share code you are working on or having trouble with.

Dribbble is a “show and tell” site for designers and creatives.

Stackoverflow is the go-to place to ask for help.

Things I care about during development phases.

  • Scalability
  • Flexibility
  • Risk
  • Cost
  • Speed

Concentrating too much on one facet risks weakening the others. Good programmers can recommend and deliver a solution that is strong in all areas (I hate developing apps that are slow but secure, or scalable but complex).

Platforms

You can sign up for online servers like Azure or AWS (my guide here), or you can use cheaper CPanel-based hosting. Read my guide on the costs of running a cloud-based service.

Use this link to get a free Digital Ocean server for two months. Read my blog post here to help set up your VM. You can always install Ubuntu on your local machine (read my guide here). Don’t forget to use a Git code repository like GitHub or Bitbucket.

Locally you can install Ubuntu (developer edition) and have a similar environment to cloud platforms.

Lessons Learned

  • Deploy servers close to the customers (Digital Ocean is too far away to scale in Australia).
  • Accessibility and testing (make things accessible from the start).
  • Backup regularly (use Git, back up your server, use Rsync to copy files to remote servers and use services like backblaze.com to back up your machine).
  • Transportability of technology (use open technology and don’t lock yourself into one platform or service).
  • Cost (expensive and convenient solutions may be costly).
  • Buy in themes and solutions (wrapbootstrap.com).
  • Do improve what you have done (make things better over time). Think progress and not perfection.

There is no shortage of online comments bagging certain frameworks or platforms so look for trends and success stories and don’t go with the first framework you find. Try candidate frameworks and services and make up your own mind.

A good plan, violently executed now, is better than a perfect plan next week. – General George S. Patton

Costs

Sometimes cost is not the deciding factor (read my blog post on Alibaba Cloud). You should estimate your app’s costs per 1,000 users. What do light v heavy users cost you? I have a blog post on the approximate cost of cloud services. I started researching a scalable NoSQL platform on IBM Cloudant and it was going to cost $4,000 USD a month, and integrating my own app logic and security was hard. I ended up testing MongoDB Cloud, where I can scale to three servers for $80 a month, but for now I am developing my current project on my own AWS server with a MongoDB instance. Read my blog post here on setting up MongoDB and read my blog post on the best MongoDB GUI.

Here is a great infographic for viewing what’s involved in mobile app development.

You can choose a number of tools or technologies to achieve your goals; for me it is doing it economically, securely and in a scalable way that has predictable costs. It is quite easy to develop something that is costly, won’t scale, or is not secure or flexible. Don’t get locked into expensive technologies. For example, AWS has a user-pays NodeJS service called Lambda where you get a million free requests a month and are then charged $0.0000002 per request thereafter. This sounds good, but I prefer fixed pricing/DIY servers as they allow me to build my own logic into apps (this is more important to me than scalability).
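
To put that per-request price in perspective, a quick back-of-the-envelope calculation using the figures above (request fees only; Lambda also bills compute time separately, which is ignored here):

// Rough AWS Lambda request-charge estimate for hypothetical traffic.
const monthlyRequests = 10000000; // say, 10 million hits a month
const freeRequests = 1000000;     // free tier: 1 million requests a month
const pricePerRequest = 0.0000002;

const cost = Math.max(0, monthlyRequests - freeRequests) * pricePerRequest;
console.log('Estimated request charge: $' + cost.toFixed(2)); // $1.80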

Using open-source software or off-the-shelf solutions may speed things up initially, but will it slow you down later? Ensure free solutions are complete and supported, and ensure frameworks are actually helping. Do you need one server or multiple servers (guide on setting up a distributed MySQL environment)? You can read about my scalability-on-a-budget journey here. You can speed up a server in two ways: scale up (add more MHz or CPU cores) or scale out (add more servers).

Start small and use free frameworks and platforms, but have a tested scale-up plan. I researched cheap Digital Ocean servers and moved to AWS to improve latency, and tested MongoDB on Digital Ocean and AWS, but have a plan to scale up to cloud.mongodb.com if need be.

Outsource (contractors) 

Remember, outsourcing work tasks (or completely outsourcing development) can buy you time and/or deliver software faster. Outsourcing can also introduce risks and be expensive. Ask for examples of previous work and get raw numbers on costs (now and in the future) and the concurrent users that a particular bit of outsourced work will achieve.

If you are looking to outsource work, do look at work that the person or company has done before (is it fast, compliant, mobile friendly, scalable, secure, robust and backed up, and do you have the rights to edit and own the IP etc)? I’d be cautious of companies who say they can do everything and don’t show live demos.

Also, beware of restrictions on your code set by the contractors. Can they do everything you need (compare with your list of MoSCoW must-haves)? Sometimes contractors only code or do what they are comfortable with, and that can impact your deliverables.

Do use a private Git repository (that you own) like GitHub or BitBucket to secure your code and use software like Trello or Atlassian JIRA to track your project. Insist the contractors use your repository to retain control.

You can always sell equity in your idea to an investor and get feedback/development from companies like Bluechilli.

Monetization and data

Do have multiple monetization streams (initial app purchase cost, in-app purchases, subscriptions, in-app credit, advertising, selling code/components etc). Monthly revenue over a yearly subscription works best to ensure cash flow.

Capture usage data and determine trends around successful engagement; improve what works. Use A/B testing to roll out new features.

I like Backblaze’s post on getting your first 1,000 customers.

Maintenance, support, risk and benefits

Building your own service can be cheaper but also riskier: if you fail to secure an app you are in trouble, if you cannot scale you are in trouble, and if you don’t update your server when vulnerabilities come out you are in trouble. Also, research monetization strategies. Apple apps do appear to deliver more profit than Android. Developers often joke “Apple devices offer 90% of the profits and 10% of the problems and Android apps offer 90% of the problems and 10% of the profits”.

Also, Apple users tend to update to the latest operating system sooner, whereas Android devices are rather fragmented.

Do inform your users with self-service status pages and informative error messages, and don’t annoy users.

Use Free Trials and Credit

Most vendors have free trials, so use them.

AWS has a 12-month free tier: https://aws.amazon.com/free/.

Use this link to get two months free with Digital Ocean.

Microsoft Azure also gives away free credit.

Google Cloud also has free credit.

Don’t be afraid to ask.

MongoDB Cloud also gives away free credit if you ask.

Security

Sites like Shodan.io will quickly reveal weaknesses in your server (and services); this will help you build robust solutions from the start, before hackers find them. Read https://www.owasp.org/index.php/Main_Page to learn how to develop secure websites. Listen to the SecurityNow podcast to learn how the technology works and gets broken. Following TroyHunt is recommended to keep up to date with security in general. @0xDUDE is a good ethical hacker to follow to stay up to date on security exploits, and @GDI_FDN is a good non-profit organization that helps defend sites that use open source software.

White hat hackers exist but so do black hat ones.

Read the Open Web Application Security site here. Read my guide on setting up public key pinning in security certificates here.

I use the ASafaWeb site to test sites for common ASP security flaws. If you have a secure certificate on your site, you will need to ensure the certificate is secure and up to date with the SSL Labs SSL Test site.


Once your website’s IP address is known (get it from SSL Labs), run a scan over your site with https://www.shodan.io/ to find open ports or security weaknesses.

Shodan.io allows you and others to see public information about your server and services. You can read about well-known internet ports here.

Anyone can find your server if you are running older (or even current) web servers and/or services.

It is a good idea to follow security researchers like Steve Gibson and Troy Hunt and stay up to date with live exploits. http://blog.talosintelligence.com is also a good site for reading technical breakdowns of exploits.

Networking

Do share and talk about what you do with other developers. You can learn a lot from other developers and this can save you loads of time and mistakes. True developers love talking about their code and solutions.

Decision Making

Quite a lot of time can be spent deciding on what technology or platform to use. I decide by factoring in cost, risk and security over flexibility, support and scalability; if I need more flexibility, lower support effort or more scalability, then I’ll choose a different technology/platform. Generally, technology can help with support. Scalable solutions need effort from start to finish (it is quite easy to slow down any technology or service).

Don’t be afraid to admit you have chosen the wrong technology or platform. It is far easier to research and move on than live with poor technology.

If you have chosen the wrong technology and stick with it, you (and others) will loathe working with it (impacting productivity/velocity). Do you spend time swapping technology or platforms now, or be less productive later?

Intellectual property and Trademarks

Ensure you search international trademarks for your app terms before you start using them. The Australian ATO has a good Australian business name checker here.

https://namechk.com/ is also a good place to search for your app ideas name before you buy or register any social media accounts.

Using https://namechk.com/ you can see “mystartupidea” name is mostly free.

And the name “microsoft” is mostly taken.

Seek advice from start-up experts at https://www.bluechilli.com/ like Alan Jones.

See my guide on how to get useful feedback for your ideas here.

Tips

  1. Use Git source control systems like GitHub or Bitbucket from the start, and back up your server and environments offsite frequently. Digital Ocean charges 20% of your server’s cost to back it up. AWS has multiple backup offerings.
  2. Start small and scale up when needed.
  3. Do lots of research and test different platforms, frameworks, and technologies and you will know what you should choose to develop with.

(Image above found at http://startupquotes.startupvitamins.com/. Follow Startup Vitamins on Twitter here.)

You will know you are a developer when you have gained knowledge and experience and can automatically avoid technologies that will not fit a solution.

Share

Don’t be afraid to share what you know (read my blog post on this here). Sharing allows you to solidify your knowledge and get new information. Shane Bishop from the EWWW Image Optimizer WordPress plugin wrote Setting up a fast distributed MySQL environment with SSL for us. If you have something to share on here, let me know on Twitter.

It’s never too late to do

One final tip: knowledge is not everything; planning and research are key. A mind that can’t develop may be better than a mind that can, because it has no experience (or baggage) and may find faster ways to do things. Thanks to http://zachvo.com/ for teaching me this during a recent WordPress re-deployment. Sometimes the simplest solution is the best.


DRAFT: 1.86 added short link

Short: https://fearby.com/go2/develop/


How to upgrade an AWS free tier EC2 t2.micro instance to an on demand t2.medium server

July 9, 2017 by Simon

Amazon Web Services has a great free tier where you can develop with very little cost. The free tier Linux server is a t2.micro server (1 CPU, low to moderate IO, 1GB memory, with 750 hours of usage a month). The free tier limits are listed here. Before you upgrade, can you optimize or cache content to limit usage?

When you are ready to upgrade resources you can use this cost calculator to set your region (I am in the Sydney region) and estimate your new costs.


You can also check out the ec2instances.info website for regional prices.

Current Server Usage

I have an NGINX server with a NodeJS back end powering an API that talks to a local MySQL database and a remote MongoDB cluster (on AWS via http://www.mongodb.com/cloud/). The MongoDB cluster was going to cost about $120 a month (too high for testing an app before launch). The free tier AWS instance is running below the 20% usage limit, so this costs nothing (on the AWS free tier).

You can monitor your instance usage and credit in the Amazon Management Console; keep an eye on the CPU usage, CPU Credit Usage and CPU Credit Balance. If the CPU usage grows and the balance drops, you may need to upgrade your server to prevent usage charges.
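
If you prefer to watch these metrics from a script rather than the console, here is a sketch using the AWS SDK for JavaScript (the region and instance id are placeholders; it requires the aws-sdk package and configured credentials):

// Fetch the recent CPUCreditBalance datapoints for an EC2 instance from CloudWatch.
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'ap-southeast-2' });

cloudwatch.getMetricStatistics({
  Namespace: 'AWS/EC2',
  MetricName: 'CPUCreditBalance',
  Dimensions: [{ Name: 'InstanceId', Value: 'i-0123456789abcdef0' }], // placeholder id
  StartTime: new Date(Date.now() - 6 * 60 * 60 * 1000), // last 6 hours
  EndTime: new Date(),
  Period: 300, // 5-minute datapoints
  Statistics: ['Average'],
}, (err, data) => {
  if (err) return console.error(err);
  data.Datapoints.forEach((p) => console.log(p.Timestamp, p.Average));
});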


I prefer the optional htop command in Ubuntu to keep track of memory and process usage.

Basic information from the AWS t2.micro (idle).


Older htop screenshot of a dual CPU VM being stressed.


Future Requirements

I plan on moving a non-clustered MongoDB database onto an upgraded AWS free tier instance to develop and test there, and when and if a scalable cluster is needed I can move the database back to http://www.mongodb.com/cloud/.

First I will need to upgrade my Free tier EC2 instance to be able to install MongoDB 3.2 and power more local processing. Upgrading will also give me more processing power to run local scripts (instead of hosting them in Singapore on Digital Ocean).

How to upgrade a t2.micro to t2.medium

The t2.medium server has 2 CPUs, 4GB of memory and low to moderate IO. (If you prefer to script the resize, see the AWS SDK sketch after the steps below.)

  1. Backup your server in AWS (and take a manual backup).
  2. Shutdown the server with the command sudo shutdown now
  3. Login to your AWS Management Console and click Instances (note: your instance will say it is running but it has shut down (test with SSH and the connection will be refused)).
  4. Click Actions, Image, then Create Image.
  5. Name the image (select any volumes you have created) and click Create Image.
  6. You can follow the progress of the snapshot creation under the Snapshots menu.
  7. When the volumes have been snapshotted, you can stop the instance.
  8. Now you can change the instance type by right-clicking on the stopped instance and selecting Instance Settings then Change Instance Type.
  9. Choose the new desired instance type and click Apply (double check your cost estimates).
  10. You can now start your instance.
  11. After a few minutes, you can log in to your server and confirm the cores/memory have increased.
  12. Note: Your server’s CPU credits will be reset and your server will start with lower CPU credits. Depending on your server you will receive credits every hour up to your maximum cap.
  13. Note: Don’t forget to configure your software to use more RAM or cores if required.
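
For completeness, here is a hedged sketch of the same stop/resize/start cycle scripted with the AWS SDK for JavaScript (region and instance id are placeholders; this does not replace the image/snapshot backup steps above):

// Stop an EC2 instance, change its type, then start it again.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'ap-southeast-2' });
const params = { InstanceIds: ['i-0123456789abcdef0'] }; // placeholder id

async function resize(newType) {
  await ec2.stopInstances(params).promise();
  await ec2.waitFor('instanceStopped', params).promise(); // block until fully stopped
  await ec2.modifyInstanceAttribute({
    InstanceId: params.InstanceIds[0],
    InstanceType: { Value: newType },
  }).promise();
  await ec2.startInstances(params).promise();
  console.log('Instance resized to ' + newType);
}

resize('t2.medium').catch(console.error);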

Misc

Confirm processors via the command line:

grep ^processor /proc/cpuinfo | wc -l 
>2

My NGINX configuration changes.

worker_processes 2;        # one worker per CPU core after the upgrade
worker_cpu_affinity auto;  # pin each worker to a CPU core
...
events {
	worker_connections 4096;	# maximum connections per worker (bounded by the open file limit)
	...
}
...

Find the open file limit of your system (worker_processes × worker_connections should not exceed this limit):

ulimit -n

That’s all for now.


v1.0 DRAFT Initial Post


Costs of running a cloud based service

June 12, 2017 by Simon

Below is a rough guide to the costs of maintaining (not setting up) a cloud-based service. The service may be a new database and web pages, or an API for a new web app or mobile app. This guide does not estimate development costs or medium-to-high usage of services.

Related Guide: Application scalability on a budget (my journey).

I tried to review Alibaba Cloud but could not spin up a VM.

Setting up a cloud server or services

CPanel Hosting

The cheapest and simplest way to get started is with a CPanel-based hosting plan with an existing website hosting company. CPanel is a hosting control panel software package that provides interfaces for various functions on a server, like creating databases, creating and editing websites, and configuring server settings. Normally a website host provides a CPanel-based plan for as low as $5 a month. CPanel is a great way to get started creating MySQL databases and PHP-based websites.

CPanel Summary

But beware: website hosts will often bundle many CPanel users onto one server and limit the capacity of your website to prevent your site hogging the resources of other users on the same server. Worst case, what you release on a CPanel-hosted website will be limited and you may receive a “your website has reached its quota” message.


Be prepared to move your content away from a CPanel based host if you hit the wall.

But CPanel-based website hosts are usually responsible for the server’s performance and any database or security certificate errors you may receive, so you usually don’t have to do much apart from contacting support when things go wrong.

If you are not technically minded CPanel is great.

Summary: A CPanel domain may cost $6+ AUD a month.

fyi: SSL on CPanel

One downside of using a CPanel-based website host is you may be charged through the teeth for an SSL certificate (they have to make money somewhere I guess). I was charged $150 a year for a poorly performing SSL certificate. If your CPanel host charges less than $50 for a good certificate, stick with them.

Bad CPanel SSL Certificate

Summary: An SSL certificate on CPanel may cost you $150 AUD a year (and it’s usually not a good certificate either).

Digital Ocean

Digital Ocean allows you to buy a server (Droplet) online (that you manage) where you install the same software as a CPanel host (MySQL, web server, SSL etc). Digital Ocean servers can be purchased for as low as $5 USD a month. Digital Ocean lets you deploy a server (Droplet), install what you wish, and scale up that server’s CPU or RAM in minutes if need be.

I have a few guides on using Digital Ocean servers.

Read my guide on how to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.

Read my guide on creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin.

Read my guide on creating and configuring a CentOS server on Digital Ocean.

fyi Beyond SSL with Content Security Policy, Public Key Pinning etc.

Use this link to sign up to Digital Ocean and get 2 months of free servers on Digital Ocean.

Summary: A server on Digital Ocean will cost you from $5 USD a month.

AWS

I would have stayed with Digital Ocean but they do not allow servers (Droplets) to be deployed to Sydney so I could not get servers to scale concurrent users very well. Read my application scalability on a budget (my journey) blog post.

AWS allows you to deploy similar servers to Sydney and has up to 12 months of free credits. An AWS free trial will give you 1 million AWS Lambda requests per month, 5GB of Amazon S3 storage, 750 hours of CPU (RDS/EC2) credits etc.

After the free trial period (or when you need more credit) you can upgrade your AWS instance and capacity. An Amazon EC2 t2.medium server with 2 vCPUs and 4GB RAM running Ubuntu is my perfect server; this will cost about $53 a month. This one server is more than enough to run multiple applications and get me approx. 2,000 concurrent users.

Read my guide on creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin.

fyi Beyond SSL with Content Security Policy, Public Key Pinning etc.

Summary: A good server (2x CPU and 4GB RAM) server on AWS may cost you from $53 USD a month.

Alibaba Cloud

I tried to set up a cloud VM on Alibaba Cloud but was unsuccessful. I’d recommend AWS if money and speed are no object, Digital Ocean if you are not in Australia and are looking to save some dollars, and CPanel hosting or Alibaba after that. Alibaba promised something similar to the low end of what Digital Ocean offers, but with twice the storage for $30 a year (half the price of Digital Ocean).

Tin box Hosting

When a virtual server is not fast enough you can always buy a dedicated server (tin box) from, say, Rackspace. Rackspace are leading providers in tin (physical) server hosting, but entry-level servers cost over $449 a month in Australia.

Rackspace cost calculator: I asked the Rackspace chat for a URL for a pricing calculator and was told one does not exist (I found one, and a server in Sydney with 3TB bandwidth and 10GB CDN bandwidth would cost me $3k+ minimum a month).

Summary: If you can consider Rackspace hosting you don’t need to ask for a price.

aaS (as a service)

One alternative is to host the application in the cloud (AWS Lambda) and pay per request rather than managing servers at all. This can lower the bills per month but could cost you more in the long term (if app usage is high, you will pay). You may find aaS too restrictive. I prefer to manage things myself over aaS.

Summary: I have not considered deploying apps on aaS hosts because I prefer to pay for a server and not for the usage. aaS hosting seems too risky in the long term.

Other Hosts

Microsoft also offers virtual servers via its Azure platform. They have free credit too.

Most server providers offer free trials and free periods (Digital Ocean 2 months free, AWS 12 months free, Azure free credit). Try each and see what works best with your application, as each will perform and cost differently.

Support and Risk

  • CPanel = lower risk for you, but it may be limiting and may not take you all the way.
  • Digital Ocean are, in my opinion, the perfect hosts, but they are not located in all regions and this hurts scalability. Digital Ocean has great FAQs and an active community.
  • Microsoft Azure are professional, but I found their firewall and documentation features lacking.
  • AWS are professional hosts but are complex and a bit pricey, and most of the time you handle the support.
  • Alibaba support is lacking (Twitter responses took 5 days and official tickets were a waste of time).

At the end of the day, you may want to get someone else to handle the risk while you handle the app or code.

Fast web servers

Of course, you want the fastest web server to put on a virtual machine. For me, I chose NGINX over Apache. Here is a good article comparing Apache and NGINX (the leading open source web servers).

Future proofing and technology longevity

Ensure you use platforms and technologies that will be around for the long haul. I was going to use Parse and Stormpath but they are no more. Sometimes developing your own thing lowers that risk.

Scalability and Future Proofing

Any provider may prove unable to handle your application’s traffic; if it cannot, be prepared to move hosts/technology.

Start small and move up. DigitalOcean, Azure and AWS make it easy to upgrade servers or add servers.

Distribute services across servers where possible and set up servers close to your customers (as I found out). Sometimes scaling out (adding extra servers) is cheaper than scaling up (adding more speed or CPU credits). Trial and error is the key.

Server Locations

As your service grows you may need to set up domains and servers in multiple regions, and this will multiply the costs, so choose your domain name wisely.

Content Caching

You may want to cache some of your service’s content on remote servers (Digital Ocean, AWS or Cachefly CDNs); this depends on costs.

You may also want to pay for remote mail providers rather than setting up a mail server.

Status page services

fyi: Read my guide on creating a service status page.

Costing Examples

Basic WordPress or website ($5 a month)

  • 1x CPanel Website/Hosting Plan.
  • 1x MySQL Database (CPanel).
  • 1x SSL Certificate (CPanel).

Small traffic Website and API Backend ($5 a month)

  • 1x Digital Ocean Server (1x vCPU, 20GB SSD, 2GB RAM and 1TB transfer)
  • 1x MySQL Database (Digital Ocean).
  • 1x NGINX Web server and NodeJS.
  • 1x SSL Certificate (Self Installed via Namecheap).

Medium traffic Website and API Backend ($20 a month)

  • 1x Digital Ocean Server (2x vCPU, 40GB SSD, 4GB RAM and 3TB transfer)
  • 1x MySQL Database (Digital Ocean).
  • 1x NGINX Web server and NodeJS.
  • 1x SSL Certificate (Self Installed via Namecheap).

Large traffic Website and API Backend ($20 a month)

  • 1x Digital Ocean Server (4x vCPU, 80GB SSD, 8GB RAM and 5TB transfer)
  • 1x MySQL Database (Digital Ocean).
  • 1x MongoDB Database (Digital Ocean).
  • 1x NGINX Web server and NodeJS.
  • 1x SSL Certificate (Self Installed via Namecheap).

Very large traffic Website and API Backend ($155 a month)

  • 1x Digital Ocean Server (1x vCPU, 20GB SSD, 2GB RAM and 1TB transfer) $5/m
  • 1x AWS Server (4x vCPU, 80GB SSD, 8GB RAM and 5TB transfer) $70/m
  • 1x MySQL Database (Digital Ocean) – $5 a month
  • 1x MongoDB Database Cluster (MongoDB Atlas) – $80 a month
  • 1x NGINX Web server and NodeJS farm
  • 1x SSL Certificate (Self Installed via Namecheap).

Very very large traffic Website and API Backend with cache distribution via CDN ($205 a month)

  • 1x Digital Ocean Server (1x vCPU, 20GB SSD, 2GB RAM and 1TB transfer) $5/m
  • 1x AWS Server (4x vCPU, 80GB SSD, 8GB RAM and 5TB transfer) $70/m
  • 1x CDN Distribution Service (CacheFly) – $50 a month
  • 1x MySQL Database (Digital Ocean) – $5 a month
  • 1x MongoDB Database Cluster (MongoDB Atlas) – $80 a month
  • 1x NGINX Web server and NodeJS farm
  • 1x SSL Certificate (Self Installed via Namecheap).

Caching

Caching content is a good idea to limit hits to your Web server, good guide here.

Security

Cheap may not be good (hosting or DIY); do check your website often in https://www.shodan.io and see if it has open services or is known to hackers.

Summary

Start small and monitor the server resources and scale up as required. Setup servers close to your customers. Choose Ubuntu OS over Windows.


v1.61 added info on Apache and nginx


Digital disruption or digital tinkering

December 20, 2016 by Simon

The biggest buzzwords used by prime ministers, presidents and management these days are “innovation” and “digital disruption”. As a developer or manager, do you understand what goes into a new digital customer-focused service like an API or data-driven portal? How well are your business or products doing in the age of innovation and digital disruption? Do you listen to what your customers want or need?

When to Pivot

There comes a time when businesses realize they need to pivot in order to stay viable.

  • People don’t rent VHS movies; they download movies from the Internet.
  • Printing photographs: who does that anymore?
  • People learn from videos on YouTube or Khan Academy for free, or pay for courses from Pluralsight.com, Coursera.org, Udemy.com or Lynda.com.
  • Information is wanted 24/7, with a call to customer service only if information cannot be sourced online.

(Image: Kodak bankruptcy chart.)

Image source.

Chances are 90% of your customers are using mobile or tablet devices on any given day.  If you are not interacting with your customers via personalized/mobile technology prepare to be overtaken as a business.

Pivoting may require you to admit you are behind the eight ball, take a risk and set up a new customer-focused web portal, API, app or services. Make sure you know what you need before the lure of services, buzzwords and “shiny object syndrome” from innovation blog posts and consultants takes hold.

Advocates and blockers of change

Creating change in an organization that bean-counts every dollar, exterminates all risk and ignores ideas is a hard sell. How do you get support from those with power and endless rolls of red tape?

Bad reasons for saying no to innovation:

  • You can’t create a mobile app to help customers because the use of our logo won’t be approved.
  • Don’t focus on customer-focused automation, analytics and innovation because internal manual processes need attention first.
  • Possible changes in 2 years outside of our control will possibly impact anything we create.
  • Third eyelids.

Management’s support of experimentation and change is key to innovation. Harvard Business Review has a great post on this: The Why, What, and How of Management Innovation.

Does your organization value innovation? This is possibly the best video that describes how the best businesses focus on innovation and take risks. Simon Sinek: How great leaders inspire action.

Here is a great post on How To Identify The Most Dangerous Person In Your Company who blocks innovation and change.

Also a few videos on getting staff on board and motivation and productivity.

Project Perspectives

Project Focus

  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.

I said that three times (because it is important).

Before you begin coding, learn from those who have failed

Here are some of the best tips I have collected from start-ups who have failed.

  • We didn’t spend enough time talking with customers and we’re rolling out features that I thought were great, but we didn’t gather enough input from clients. We didn’t realize it until it was too late. It’s easy to get tricked into thinking your thing is cool. You have to pay attention to your customers and adapt to their needs.
  • The cloud is great. Outsourcing is great. Unreliable services aren’t. The bottom line is that no one cares about your data more than you do – there is no replacement for a robust due diligence process and robust thought about avoiding reliance on any one vendor.
  • Your heart doesn’t get satisfied with any levels of development. Ignore your heart. Listen to your brain.
  • You can always iterate and extrapolate later. Wet your feet asap.
  • As the product became more and more complex, the performance degraded. In my mind, speed is a feature for all web apps so this was unacceptable, especially since it was used to run live, public websites. We spent hundreds of hours trying to speed up the app with little success. This taught me that we needed to have benchmarking tools incorporated into the development cycle from the beginning due to the nature of our product.
  • It’s not about good ideas or bad ideas: it’s about ideas that make people talk. Make some aspect of your product easy and fun to talk about, and make it unique.
  • We really didn’t test the initial product enough. The team pulled the trigger on its initial launches without a significant beta period and without spending a lot of time running QA, scenario testing, task-based testing and the like. When v1.0 launched, glitches and bugs quickly began rearing their head (as they always do), making for delays and laggy user experiences aplenty — something we even mentioned in our early coverage.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • Our biggest self-realization was that we were not users of our own product. We didn’t obsess over it and we didn’t love it. We loved the idea of it. That hurt.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.
  • It may seem surprising that a seemingly successful product could fail, but it happens all the time. Although we arguably found product/market fit, we couldn’t quite crack the business side of things. Building any business is hard, but building a business with a single app offering and half of your runway is especially hard.

Buzzwords

The innovation landscape is full of buzzwords; here are just a few you will need to know.

  • API – Application Programming Interface: an interface, usually exposed via a web address (e.g. http://www.server.com/api/important/action/), that accepts requests and delivers results (see the sketch at the end of this list). Learn more about APIs here: http://www.programmableweb.com/category/all/apis
  • AR – Augmented reality is where you use a screen on a mobile, tablet or PC to overlay 3D or geospatial information.
  • Big Data – Is about taking a wider view of your business data to find insights and to predict and improve products and services.
  • BYOD – Bring your own device.
  • BYOC – Bring your own cloud.
  • Caching – Using software to deliver data from memory rather than from a slower database each time.
  • Cloud – Someone else’s computer that you run software or services on.
  • CouchDB – An Apache key/value NoSQL JSON document database that focuses on eventual consistency via replication.
  • DaaS – Desktop as a service
  • DbaaS – Database as a service (hardware and database software maintained by others but your data).
  • DBMS – Database Management System – the software (often with a GUI) that stores and manages your databases.
  • HPC – High-Performance Computing.
  • IaaS – Cloud-based Servers and infrastructure (Google Cloud, Amazon AWS, Digital Ocean and Vultr and Rackspace).
  • IDaaS – Third Party Authorisation management
  • IOPS – Input/Output Operations per Second – the read/write throughput limit of the storage or service in question.
  • IoT – Internet of Things: small devices that can display, sense or update information (an internet-connected fridge, or a button that orders more toilet paper).
  • iPaaS – Integration Platform as a Service (software to integrate multiple XaaS offerings).
  • JSON – JavaScript Object Notation, a simple structured text format; think of it as a better CSV file (read more here).
  • MaaS – Monitoring as a Service (e.g Keymetrics.io)
  • CaaS – Communication as a Service (e.g. https://www.twilio.com)
  • Micro-services – small, single-purpose services, often managed by another vendor (e.g. notifications, login management, email or storage), usually charged by usage.
  • MongoDB – Another NoSQL JSON document database, with built-in replication.
  • NoSQL – A non-relational database that stores data in JSON documents instead of normalised, related tables.
  • PaaS – A larger stack of services that you can customise, from vendors such as Azure (Active Directory, Compute, Storage Blobs etc), AWS (SQS, RDS, ElastiCache, Elastic File System), Google Cloud (Compute Engine, App Engine, Datastore), Rackspace etc.
  • Rate Limiting – Ability to track and limit a user’s request to an API.
  • SaaS – A smaller software component that you can use or integrate (Google Apps, Cisco WebEx, GoToMeeting).
  • Scalable – the ability of a website or service to handle thousands to millions of hits, with a baked-in way to handle exponential growth.
  • Scale Up – Adding more CPU/RAM to a single server so it can handle a bigger workload.
  • Scale Out – Adding more servers and distributing the load instead of making servers faster.
  • SQL – A traditional relational database query language.
  • VR – Virtual Reality is where you totally immerse yourself in a 3D world with a head-mounted display.
  • XaaS – Anything as a service.

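To make a couple of these buzzwords concrete (API and JSON), here is a minimal sketch; api.example.com and the customer resource are made-up placeholders, not a real service:

# Ask a (hypothetical) REST API for a resource over HTTP...
curl -s "https://api.example.com/v1/customers/42"

# ...and it answers with JSON, a structured text document:
# {"id": 42, "name": "Jane Citizen", "plan": "pro"}
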
External or Online Advice

A consultant once joked to our team that their main job was to “Con” and “Insult” you (CONinSULTant). Their main job is to promote what they know/sell and sow seeds of doubt about what you do. Having said that, please take my advice with a grain of salt (I am just relaying what I know and prefer).

Consultants need to rapidly convert you to their way of thinking (and services); they gloss over what they don’t know and lead you down the path to a happy partial-solution nirvana (often ignoring your legacy apps or processes, with any roadblocks relished as an opportunity for more money-making). This is great if you have endless buckets of money and want to rewrite things over and over.

Having consultants design and develop a solution is not all bad, but that would make this developer-focused blog post boring.

Microsoft IIS, Apache, NGINX and Lighttpd are all good web servers, but each has a different memory footprint, performance profile and feature set when delivering static v dynamic content, and each platform has a maximum number of concurrent users it can handle per second for a given server configuration.

You don’t need expensive solutions, read this blog post on “How I built an app with 500,000 users in 5 days on a $100 server”

Snip: I assume my apps will be successful. There’s no point in building an app assuming it won’t be successful. I would not be able to sleep if my app gains traction and then dies due to bad tech. I bake minimum viable scalability principles into my app. It’s the difference between happiness and total panic. It’s what I think should be part of an app MVP (Minimum Viable Product).

Blind googling to find the best platform can be misleading as it is hard to compare apples to apples. Take your time and write some code and evaluate for yourself.

  • This guide highly recommends Microsoft .NET and IIS web servers:  https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/
  • This guide says G-WAN, NGINX and Apache are good http://gwan.com/benchmark

Once you start worrying about scalability you start to plan for multiple servers, load balancing, replication and caching; be prepared to open your wallet.

I prefer the free NGINX, and if I need more grunt down the track I can move to NGINX Plus, as it has loads of advanced scalability and caching options: https://www.nginx.com/products/.
Alternatively, you can use XaaS for everything and have other people worry about uptime/scaling and data storage, but I find it is inevitable that you will need the flexibility of a self-managed server and FULL control of the core processes.

Golden rule: prove it is cheaper/faster/more reliable; don’t just trust someone.

Common PaaS, SaaS and Self-Managed Server Vendors

Amazon AWS and Azure are the go-to cloud vendors, offering robust and flexible products.

Azure: https://azure.microsoft.com/en-us/

Amazon AWS: https://aws.amazon.com/

Google Cloud has many offerings, but product selection is hard, prices are high, and Google tends to kill off products that don’t make money (e.g. Google Gears).

Google Cloud:

https://cloud.google.com/

Simple Self Managed Servers

If you want a server in the cloud on the cheap Linode and Digital Ocean have you covered.

  • Digital Ocean: http://www.digitalocean.com
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Linode: https://www.linode.com/

High-End Corporate vendors

  • Rackspace: https://www.rackspace.com/en-au/cloud
  • IBM Cloud: http://www.ibm.com/cloud-computing/au/#infrastructure

Other vendors

  • Engineyard: http://www.engineyard.com/
  • Heroku: https://www.heroku.com/
  • Cloud66: http://www.cloud66.com/
  • Parse: DEAD

Moving to Cloud Pro’s

  • Lowers Risk
  • Outsource talent
  • Scale to millions of users/hits
  • Pay for what you use
  • Granular access
  • Potential savings *
  • Lower risk *

Moving to Cloud Con’s

  • Usually billed in USD
  • Limited upload/downloads or API hits a day
  • Intentional tier pain points (Limited storage, hits, CPU, data transfers, Minimum servers).
  • Cheaper multi-tenant servers v expensive dedicated servers with dedicated support
  • Limited IOPS (e.g. 30 API hits a second, then $100 per additional 10 req/sec)
  • XaaS Price changes
  • Not fully integrated (still need code)
  • Latency between Services.
  • Limited access for developers (not granular enough).
  • Security

Vendors can change their prices whenever they want, I had a cluster of MongoDB servers running on AWS (via http://www.mongodb.com/cloud/ ) and one day they said they needed to increase their prices because they underestimated the costs for the AWS servers. They gave me some credit but I was instantly paying more and was also tied to USD (not AUD). A fall in the Australian dollar will impact bills in a big way.

Vendor Uptime:

Not all vendors are stable, do your research on who are the most reliable: https://cloudharmony.com/status

Quick Status Pages of leading vendors.

  • AWS: https://status.aws.amazon.com/
  • Azure: https://azure.microsoft.com/en-us/status/
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Digital Ocean: https://status.digitalocean.com/
  • Google: https://status.cloud.google.com/
  • Heroku: https://status.heroku.com/
  • Linode: https://status.linode.com/
  • Cloud66: http://status.cloud66.com/

Some vendors have patchy uptime


Management Software and Support:

Don’t lock in a vendor(s) until you have tested their services and management interfaces and can accurately forecast your future app costs.

I found that Digital Ocean was the simplest to get started, had capped prices and had the best documentation. However, Digital Ocean do not sell advanced services or advanced support and they did not have servers in Australia.

Google Cloud left a lot to be desired with product selection, setup and documentation. It did not take me long to realize I would be paying a lot more on Google platforms.

Azure was quite clean and crisp but lacked the controls I was looking for. Azure is designed to be simple with a professional appearance (I found the default security was not high enough for my unmanaged Ubuntu servers). Azure was 4x the cost of Digital Ocean servers and 2x the cost of AWS.

AWS management interfaces were very confusing at first but support was not far away online.  AWS seemed to have the most accurate cost estimators and developer tools to make it my default choice.

Free Trials

When searching for a cloud provider to test look for free trials and have a play before you decide what is best.

https://aws.amazon.com/free/ – 12 Month free trial.

https://azure.microsoft.com/en-us/free/ –  $200 credit.

Digital Ocean – 2 months free for new customers.

Cloudant offered $50 free a month for a single multi-tenant NoSQL database, but after the IBM acquisition the costs seem steep (financing is available, so it must be expensive). I walked away from IBM because it was going to cost me $4,000 a month for one dedicated Cloudant CouchDB node.

Costs

It is hard to forecast your costs if you do not know what components you will use, what the CPU activity will be and what data will be delivered.

Google and AWS have a confusing mix of base rates, CPU credits, and data costs. You can boost your credits and usage but it will cost you compared to a flat rate server cost.

Digital Ocean and Linode offer great low rates for unmanaged servers, with reasonable extra charges where other vendors will scalp you from the get-go, but they lack a global presence.

Azure is a tad more expensive than AWS and a lot more expensive than Digital Ocean.

At some point you need to spin up some servers and play around, and be prepared to change to another vendor if needed. I was tempted by the IBM Cloudant CouchDB DBaaS, but it would have been $4,000 USD a month (it did come with 24/7 techs who monitored the service for me).

Databases

Relational databases like MySQL and SQL Server are solid choices but replication can be tricky. See my guide here.

  • NoSQL databases are easier to scale up and out, but more care has to be given to the software controlling the data and handling collisions. Relational databases are harder to scale but are designed to enforce referential integrity.

Design what you need and then choose a relational, NoSQL or mixed set of databases. A good API can sit in front of a mix of databases and deliver the best of both worlds.

E.g. geographic data may be best served from MongoDB, while related customer data comes from MySQL or MS SQL Server.

Database costs will also impact your decisions. E.g. why set up SQL Server when MySQL will do; why set up a MongoDB cluster when a single MongoDB instance will do.

Also, when you scale out, databases vary in which of these capabilities they can provide (the CAP theorem says you can only fully deliver two of the three):

  • Availability – Each client can always read and write data.
  • Consistency – All clients have the same view of the data
  • Partition Tolerance – The System works well despite physical network partitions.


Database decisions will impact the code and complexity of your application.

Website and API Endpoint

The website will be the glue that sticks all the pieces together. An API on a web server (e.g. https://www.myserver.com/api/v1/do/something/important) may trigger the actions below (see the curl sketch after this list).

  1. Check the request origin (IP ban) – check the IP cache or request a new IP lookup.
  2. Validate SSL status.
  3. Check the user’s login tokens (are they logged in?) – log the output.
  4. Check a database (MySQL).
  5. Check for permissions – is this action allowed to happen?
  6. Check for rate-limiting – has a threshold been exceeded?
  7. Check another database (MongoDB).
  8. Prepare the data.
  9. Resolve the API request – return the data.

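From the client side most of those steps are invisible; here is a hedged sketch (the endpoint, token and headers are hypothetical) of what the round trip looks like, including what a rate-limited caller might see:

# Call the API endpoint with a session token (step 3); -D - dumps response headers
curl -s -D - \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  "https://www.myserver.com/api/v1/do/something/important"

# If rate-limiting (step 6) trips, the server would typically reply:
# HTTP/1.1 429 Too Many Requests
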
A web server then becomes very important, as it is managing a lot. If you decide to use a remote “as a service” ID management API or application endpoint, would each of these steps happen in a reasonable time-frame? StormPath may be a great service for ID auth, but I had issues with reliability early on and costs were unpredictable; Google Firebase is great for application endpoints, but it can be expensive.

Carefully evaluate the pros and cons of going DIY/self-managed versus a mix of “as a service” and full “as a service”.

I find that NGINX and NodeJS are the perfect balance between cost, flexibility, scalability and risk [link to my scalability guide]. NodeJS is great for integrating MySQL, API or MongoDB calls into the back end in a non-blocking way, and it can easily integrate caching and connection pooling to enhance throughput.

Mulesoft is a good (but expensive) API development suite https://www.mulesoft.com/platform/api

Location, Latency and Local Networks.

You will want to try and keep all of your servers and services as close together as possible; don’t spin up a Digital Ocean server in Singapore if your customers are in Australia (the Netflix effect will see latency fall off a cliff at night). Also, having a database on one vendor and a web server on another may add extra latency; try to use the same vendor in the same data centre.

Don’t forget SSL will add about 40ms to any request locally (and up to 200ms for overseas servers), which does impact maximum concurrent users (but you need strong SSL).
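
You can measure this overhead yourself with curl’s timing variables; a rough sketch (example.com stands in for your own server) comparing a plain and an SSL request:

# time_connect = TCP handshake only; time_total on HTTPS also includes the TLS handshake
curl -o /dev/null -s -w "connect: %{time_connect}s total: %{time_total}s\n" http://example.com/
curl -o /dev/null -s -w "connect: %{time_connect}s total: %{time_total}s\n" https://example.com/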

  • Application scalability on a budget (my journey)
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
  • The quickest way to setup a scalable development ide and web server
  • More here: https://fearby.com/

Also, remember that servers have performance limitations (maximum IOPS); sometimes you need to pay for higher IOPS or performance tiers to get better throughput.

Security

Ensure that everything is secure and logged, and that you have some sort of IP banning or rate-limiting, session tokens/expiry, and/or automatic log-out.

Your servers need to be patched and potential exploits monitored, don’t delay updating software like MySQL and OpenSSL when exploits are known.

Consider getting advice from a company like https://www.whitehack.com.au/ where they can review your code and perform penetration testing.

  • Beyond SSL with Content Security Policy, Public Key Pinning etc
  • Update OpenSSL on a Digital Ocean VM
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

You may want to limit the work you do on authorization management and get a third party to do it; https://www.okta.com/ or http://www.stormpath.com can help here.

You will certainly need to implement two-factor authentication, OAuth 2, session tokens, forward secrecy, rate limiting, IP logging and polymorphic data return via your API. Security is a big one.

Here is a benchmark for an API hit overseas with and without SSL

(screenshot: API response times with and without SSL)

Moving my Digital Ocean Server from Singapore to AWS in Australia dropped my API requests to under 200ms (SSL, complete authorization, logging and payload delivery).

Monitoring and Benchmarking

Monitoring your website’s health (CPU, RAM and disk) along with software and database monitoring is very important for maintaining a service.

https://keymetrics.io/ is a great NodeJS service and API monitoring application.


PM2 is a great node module that integrates Keymetrics with NodeJS.

Siege is a good command-line benchmarking tool; check out my guide here.

http://www.loader.io is a great service for hitting your website from across the world.

(screenshot: AWS MongoDB test)

End to End Analytics

You should be capturing analytics from end to end (failed logins, invalid packets, user usage etc). Caching content and blocking bad users can then be implemented to improve performance.

Developer access

All platforms offer varied levels of access for developers to change things. I prefer the awesome http://www.c9.io for connecting to my servers.


If you go with high-level SaaS (Microsoft CRM, Sitecore CRM etc) you may be locked into outdated software that is hard for developers to modify and support.

Don’t forget your customers.

At this point you will have a million thoughts on possible solutions and problems, but don’t forget to concentrate on what you are developing and whether it is viable. Do you have validated customer needs, and will you be working to solve those problems?

Project Pre-Mortem

Don’t be afraid to research what could go wrong. Are you about to spend money adding another layer of software to improve something without solving the problem at hand?

It is a good idea to quickly guess what could go wrong before deciding on a way forward.

  • Server scalability
  • Features not polished
  • Does not meet customer needs
  • Monetization Issues
  • Unknown usage costs
  • Bad advice from consultants
  • Vendors collapsing or being bought out.

Long game

Make sure you choose a vendor that won’t go broke. Smaller vendors like Parse get gobbled up (Facebook bought Parse and then closed its doors, leaving customers in the lurch). Even C9.io has been purchased by AWS and its future is uncertain. Will Linode and Digital Ocean be able to compete against AWS and Azure? Don’t lock yourself into one solution, and always have a backup plan.

Do

  • Do know what your goal is.
  • Make a start.
  • Iterate in public.
  • Test everything.

Don’t

  • Don’t trust what you have been told.
  • Don’t develop without a goal.
  • Don’t be attracted to buzzwords, new tech and shiny objects.

Good luck and happy coding.


Edit v1.11

Filed Under: Backup, Business, Cloud, Development, Hosting, Linux, MySQL, NodeJS, Scalability, Scalable, Security, ssl, Uncategorized Tagged With: digital disruption, Innovation

Setting up a fast distributed MySQL environment with SSL

September 13, 2016 by Shane Bishop

The following is a guest post from Shane Bishop from https://ewww.io/ (developer of the awesome EWWW Image Optimizer plugin for WordPress). Read my review of this plugin here.



I’m a big fan of redundancy and distribution when it comes to network services. I don’t like to keep all my servers in one location, or even with a single provider. I’m currently using three different providers right now for various services. But when it comes to database communication, this poses a bit of a problem. Naturally, you would implement a firewall to restrict connections only to specific IP addresses, but if you’ve got servers all across the United States (or the globe), the communication is completely unencrypted by default.

Fortunately, MySQL has the ability to secure those communications and even require that specific user accounts use encryption for all communication. So I’m going to show you how to set up that encryption, give a brief overview of setting up MySQL replication, and give you several examples of different ways to securely connect to your database server(s). I used several different resources in setting this up for EWWW I.O., but none of them had everything I needed, and some had critical errors:

Setting up MySQL and secure connections

Getting Started with MySQL over SSL

How to enable SSL for MySQL server and client

I use Debian 8 (fewer major releases than Ubuntu, and rock-solid stability), so these instructions will apply to MySQL 5.5 and PHP 5.6, although most of it will work fine on any system. If you aren’t using PHP, you can just skip that section, and apply this to MySQL client connections, and replication. I’ll try to point out any areas where you might have difficulty on other versions, and you’ll need to modify any installation steps that use apt-get to use yum instead if you’re on CentOS, RHEL, or SuSE. If you’re running Windows, sorry, but just stop it. I would never trust a Windows server to do any of these things on the public Internet even with secured connections. You could attempt to do some of this on a Windows box for testing, but you can setup a Linux virtual machine using VirtualBox for free if you really want to test things out locally.

Setup the Server

First, we need to install the MySQL server on our system (you should always use sudo, instead of logging in as root, as a matter of “best practice”):

sudo apt-get install mysql-server

The installation will ask for a root password, and for you to confirm it. This is the account that has full and complete privileges on your MySQL server, so pick a good one. If this gets compromised, all is lost (or very nearly). Backups are your best friend, but even then it might be difficult to know what damage was done, and when. You’ll also want to run this, to make sure your server isn’t allowing test/guest access:

sudo mysql_secure_installation

You should answer yes to just about everything, although you don’t have to change your root password if you already set a good one. And just to make sure I’m clear on this: the root password here is not the same as the root password for the server itself. This root password is only for MySQL. You shouldn’t ever use the root login on your server, EVER. It should be disabled so that you can only run privileged operations via sudo, which you can do like this:

sudo passwd -l root

That command just disabled the root user, and should also be a good test to verify you already have an account that can sudo successfully, although I’d recommend testing it with something a little less drastic before you disable the root login.

Generating Certificates & Keys for the server

Normally, setting up secure connections involves purchasing a certificate from an established Certificate Authority (CA), and then downloading that certificate to your machine. However, the prevailing attitude with MySQL seems to be that you should build your own CA so that no one else has any influence on the private keys used to issue your server certificates. That said, you can still purchase a cert if that feels like the right route to go for you. Every organization has different needs, and I’m a bit new to the MySQL SSL topic, so I won’t pretend to be an expert on what everyone should do.

The Certificate Authority consists of a private key, and a CA certificate. These are then used to generate the server and client certificates. Each time you generate a certificate, you first need a private key. These private keys cannot be allowed to fall into the wrong hands, but you also need to have them available on the server, as they are used in establishing a secure connection. So if anyone else has access to your server, you should make sure the permissions are set so that only the root user (or someone with sudo privileges) can access them.

The CA and your server’s private key are used to authenticate the certificate that the server uses when it starts up, and the CA certificate is also used to validate any incoming client certificates. By the same token, the client will use that same CA certificate to validate the server’s certificate as well. I store the bits necessary in the /etc/mysql/ directory, so navigate into that directory, and we’ll use that as a sort of “working directory”. Also, the first command here lets you establish a “sudo shell” so that you don’t have to type sudo in front of every command. Let’s generate the CA private key:

sudo -s
cd /etc/mysql/
openssl genrsa 2048 > cakey.pem

Next, generate a certificate based on that private key:

openssl req -sha256 -new -x509 -nodes -days 3650 -key cakey.pem > cacert.pem

Of note are the -sha256 flag (do not use -sha1 anymore, it is weak), and the certificate expiration, set by “-days 3650” (10 years). Answer all the questions as best you can. The common name (CN) here is usually the hostname of the server, and I try to use the same CN throughout the process, although it shouldn’t really matter what you choose as the CN. If you follow my instructions, the CN will not be validated, only the client and server certificates get validated against the CA cert, as I already mentioned. Especially if you have multiple servers, and multiple servers acting as clients, the CN values would be all over the place, so best to keep it simple.

So the CA is now setup, and we need a private key for the server itself. We’ll generate the key and the certificate signing request (CSR) all at once:

openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem > server-csr.pem

This will ask many of the same questions, answer them however you want, but be sure to leave the passphrase empty. This key will be needed by the MySQL service/daemon on startup, and a password would prevent MySQL from starting automatically. We also need to export the private key into the RSA format, or MySQL won’t be able to read it:

openssl rsa -in server-key.pem -out server-key.pem

Lastly, we create the server certificate using the CSR (based on the server’s private key) along with the CA certificate and key:

openssl x509 -sha256 -req -in server-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > server-cert.pem
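
Before wiring these files into MySQL, it is worth sanity-checking that the new server certificate actually validates against your CA; this is the same validation the client will perform later:

# Should print: server-cert.pem: OK
openssl verify -CAfile cacert.pem server-cert.pem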

Now we have what we need for the server end of things, so let’s edit our MySQL config in /etc/mysql/my.cnf to contain these lines in the [mysqld] section:

ssl-ca=/etc/mysql/cacert.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

If you are using Debian, those lines are probably already present, but commented out (with a # in front of them). Just remove the # from those three lines. If this is a fresh install, you’ll also want to set the bind-address so that it will allow communication from other servers:

bind-address = 198.51.100.10 # replace this with your actual IP address

or you can let it bind to all interfaces (if you have multiple IP addresses):

bind-address = *

Then restart the MySQL service:

sudo service mysql restart
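
After the restart you can confirm MySQL picked up the certificates; if the config is good, have_ssl reports YES:

# Check that SSL support is active on the server
mysql -u root -p -e "SHOW VARIABLES LIKE 'have_ssl';"
# +---------------+-------+
# | Variable_name | Value |
# +---------------+-------+
# | have_ssl      | YES   |
# +---------------+-------+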

Permissions

If this is an existing MySQL setup, you’ll want to wait until you have all the client connections setup to require SSL, but on a new install, you can run this to setup a new user with SSL required:

GRANT ALL PRIVILEGES ON `database`.* TO 'database-user'@'%' IDENTIFIED BY 'reallysecurepassword' REQUIRE SSL;

I recommend creating individual user accounts for each database you have, so substitute the name of your database in the above command, as well as replacing the database-user and “really secure password” with suitable values. The command above also allows them to connect from anywhere in the world, and you may only want them to connect from a specific host, so you would replace the ‘%’ with the IP address of the client. I prefer to use my firewall to determine who can connect, as it is a bit easier than running a GRANT statement for every single host that is permitted. One could use a wildcard hostname like *.example.com but that would entail a DNS lookup for every connection unless you make sure to list all your addresses in /etc/hosts on the server (yuck). Additionally, using your firewall to limit which hosts can connect helps prevent brute-force attacks. I use ufw for that, which is a nice and simple command-line interface to iptables. You also need to run this after you GRANT privileges:

FLUSH PRIVILEGES;

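As an example of the firewall approach mentioned above, a ufw rule that only lets one known client host reach MySQL might look like this (203.0.113.20 is a placeholder address):

# Allow only a specific host to reach the MySQL port
sudo ufw allow from 203.0.113.20 to any port 3306 proto tcp
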
Generating a Certificate and Key for the client

With most forms of encryption, only the server needs a certificate and key, but with MySQL, both server and client can have encryption keys. A quick test from my local machine indicated that it would automatically trust the server cert when using the MySQL client, but we’ll setup the client to use encryption just to be safe. Since we already have a CA setup on the server, we’ll generate the client cert and key on the server. First, the private key and CSR:

openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout client-key.pem > client-csr.pem

Again, we need to export the key to the RSA format, or MySQL won’t be able to view it:

openssl rsa -in client-key.pem -out client-key.pem

And last step is to create the certificate, which is again based off a CSR generated from the client key, and sign the certificate with the CA cert and key:

openssl x509 -sha256 -req -in client-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > client-cert.pem

We now need to copy three files to the client. The certs are just text files, so you can copy and paste them, or you can use scp to transfer them:

  • cacert.pem
  • client-key.pem
  • client-cert.pem

If you don’t need the full mysql-server on the client, or you just want to test it out, you can install the mysql-client like so:

sudo apt-get install mysql-client

Then, open /etc/mysql/my.cnf and put these three lines in the [client] section (usually near the top):

ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem

You can then connect to your server like so:

mysql -h 198.51.100.10 -u database-user -p

It will ask for a password, which you set to something really awesome and secure, right? At the MySQL prompt, you can just type the following command shortcut, and look for the SSL line, which should say something like “Cipher in use is …”

\s

You can also specify the --ssl-ca, --ssl-cert, and --ssl-key settings on the command line of the ‘mysql’ command to set the locations dynamically if need be. You may also be able to put them in your .my.cnf file (the leading dot makes it a hidden file, and it should live in ~/ which is your home directory). So for me that might be /home/shanebishop/.my.cnf
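
For example, a one-off connection specifying the locations explicitly (using the paths set up earlier in this guide) would look like:

# Point the client at the CA cert and the client cert/key explicitly
mysql --ssl-ca=/etc/mysql/cacert.pem \
      --ssl-cert=/etc/mysql/client-cert.pem \
      --ssl-key=/etc/mysql/client-key.pem \
      -h 198.51.100.10 -u database-user -p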

Using SSL for mysqldump

To my knowledge, mysqldump does not use the [client] settings, so you can specify the cert and key locations on the command line like I mentioned, or you can add them to the [mysqldump] section of /etc/mysql/my.cnf. To make sure SSL is enabled, I run it like so:

mysqldump --ssl -h 198.51.100.10 -u database-user -p'reallysecurepassword' database > database-backup.sql

Setup Secure Connection from PHP

That’s all well and good, but most of the time you won’t be manually logging in with the mysql client, although mysqldump is very handy for automated nightly backups. I’m going to show you how to use SSL in a couple other areas, the first of which is PHP. It’s recommended to use the “native driver” packages, but from what I could see, the primary benefit of the native driver is decreased memory consumption.  There just isn’t much to see in the way of speed improvement, but perhaps I didn’t look long enough. However, being one to follow what the “experts” say, you can install MySQL support in PHP like so:

sudo apt-get install php5-mysqlnd

If you are using PHP 7 on a newer version of Ubuntu, the “native driver” package is now standard:

sudo apt-get install php7.0-mysql

If you are on a version of PHP less than 5.6, you can use the example code at the Percona Blog. However, in PHP 5.6+, certificate validation is a bit more strict, and early versions just fell over when trying to use the mysqli class with self-signed certificates like we have. Now that the dust has settled with PHP 5.6 though, we can connect like so:

<?php
$server = '198.51.100.10';
$dbuser = 'database-user';
$dbpass = 'reallysecurepassword';
$database = 'database';
$connection = mysqli_init();
// MYSQLI_CLIENT_SSL forces an encrypted connection using the [client] certs from my.cnf
if ( ! mysqli_real_connect( $connection, $server, $dbuser, $dbpass, $database, 3306, '/var/run/mysqld/mysqld.sock', MYSQLI_CLIENT_SSL ) ) {
    error_log( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
    die( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
}
$result = mysqli_query( $connection, "SHOW STATUS like 'Ssl_cipher'" );
print_r( mysqli_fetch_assoc( $result ) );
mysqli_close( $connection );
?>

Saving this as mysqli-ssl-test.php, you can run it like this, and you should get similar output:

user@server:~$ php mysqli-ssl-test.php
Array
(
  [Variable_name] => Ssl_cipher
  [Value] => DHE-RSA-AES256-SHA
)

Setup Secure (SSL) Replication

That’s all fine for a couple servers, but at EWWW I.O. I quickly realized I could speed things up if each server had a copy of the database. In particular, a significant speed improvement can be had if you set up all SELECT queries to use the local database (replicated from the master). While a query to the master server might take 50ms or more, querying the local database gives you sub-millisecond query times. Beyond that, I also wanted redundant write ability, so I set up two masters that replicate off each other, ensuring I never miss an UPDATE/INSERT/DELETE transaction if one of them dies. I’ve been running this setup since the Fall of 2013, and it has worked quite well. There are a few things you have to watch out for. The biggest is that if a master server has a hard reboot and MySQL doesn’t get shut down properly, you have to re-do the replication setup on any slaves that communicate with that master, as the binary log gets corrupted. You also have to resync the other master in a similar manner.

The other thing to be careful of is conflicting INSERT statements. If you try to INSERT two records with the same primary key from two different servers, it will cause a collision if those keys are set to be UNIQUE. You also have to be careful if you are using numerical values to track various data points: use MySQL’s built-in arithmetic, rather than querying a value, adding to it in your code, and then updating the new value in a separate query.
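
A minimal sketch of the difference, using a hypothetical credits table (not part of the EWWW schema); the first form is safe under replication, the second is a race waiting to happen:

# Safe: let MySQL do the arithmetic in one atomic statement
mysql -h 198.51.100.10 -u database-user -p -e \
  "UPDATE credits SET balance = balance - 1 WHERE user_id = 42;"

# Risky: SELECT the value, add to it in application code, then UPDATE it back.
# Two servers doing this at once will silently lose one of the updates.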

So first I’ll show you how to setup replication (just basic master to slave), and then how to make sure that data is encrypted in transit. We should already have the MySQL server installed from above, so now we need to make some changes to the master configuration in /etc/mysql/my.cnf. All of these changes should be made in the [mysqld] section:

max_connections = 500 # the default is 100, and if you get a lot of servers running in your pool, that may not cut it
server-id = 1 # any numerical value will do, but every server should have a unique ID, I started at 1 for simplicity
log-bin = /var/log/mysql/mysql-bin.log
log-slave-updates = true # this one is only needed if you're running a dual-master setup

I’ve also just discovered that it is recommended to set sync_binlog to 1 when using InnoDB, which I am. I haven’t had a chance to see how that impacts performance, so I’ll update this after I’ve had a chance to play with it. The beauty of that is it *should* avoid the problems with a server crash that I mentioned above. At most, you would lose 1 transaction due to an improper server shutdown. All my servers use SSD, so the performance hit should be minimal, but if you’re using regular spinning platters, then be careful with the sync_binlog setting.
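
If you want to trial it before committing it to my.cnf, sync_binlog is a dynamic variable, so you can flip it at runtime and watch the effect (it reverts on restart unless you also add it to the config):

# Enable synchronous binary log flushing on the fly
mysql -u root -p -e "SET GLOBAL sync_binlog = 1;"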

Next, we do some changes on the slave config:

server-id = 2 # make sure the id is unique
report-host = myserver.example.com # this should also be unique, so that your master knows which slave it is talking to
log-bin = /var/log/mysql/mysql-bin.log

Once that is setup, you can run a GRANT statement similar to the one above to add a user to do replication, or you can just give that user REPLICATION_SLAVE privileges.

IMPORTANT: If you run this on an existing slave-master setup, it will break replication, as the REQUIRE SSL statement seems to apply to all privileges granted to this user, and we haven’t told it what certificate and key to use. So run the CHANGE MASTER TO statement further down, and then come back here to enforce SSL for your replication user.

GRANT REPLICATION SLAVE ON *.* TO 'database-user'@'%' REQUIRE SSL;

Now we’re ready to synchronize the database from the master to the slave the first time. The slave needs 3 things:

  1. a dump of the existing data
  2. the binary log filename, as MySQL adds a numerical identifier to the log-bin setting above, and increments it periodically as the binary logs hit their max size
  3. the position within the binary log where the slave should start applying changes

The hard way (that I used 3 years ago) can be found in the MySQL Documentation. The easy way is to use mysqldump (found on a different page in the MySQL docs), which you probably would have used anyway for obtaining a dump of the existing data:

mysqldump --all-databases --master-data -u root -p > dbdump.db

By using the --master-data flag, it will insert items #2 and #3 into the generated SQL file, and you will avoid having to hand-type the binary log filename and coordinates. At any rate, you then need to log in via your MySQL client on the slave server and run a few commands at the MySQL prompt to prep the slave for the import (replacing the values as appropriate):

mysql -uroot -p
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    -> MASTER_HOST='master_host_name',
    -> MASTER_USER='replication_user_name',
    -> MASTER_PASSWORD='replication_password';
exit

Then you can import the dbdump.db file (copy it from the master using SCP or SFTP):

mysql -uroot -p < dbdump.db

Once that is imported, we want to make sure our replication is using SSL. You can also run this on an existing server to upgrade the connection to SSL, but be sure to STOP SLAVE first:

mysql> CHANGE MASTER TO MASTER_SSL=1,
    -> MASTER_SSL_CA='/etc/mysql/cacert.pem',
    -> MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
    -> MASTER_SSL_KEY='/etc/mysql/client-key.pem';

After that, you can start the slave:

START SLAVE;

Give it a few seconds, but you should be able to run this to check the status pretty quick:

SHOW SLAVE STATUS\G

A successfully running slave should say something like “Waiting for master to send event”, which simply indicates that it has applied all transactions from the master, and is not lagging behind.
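
A quick way to keep an eye on replication health from the shell is to pull just the interesting fields out of that status output:

# Both *_Running fields should say Yes, and Seconds_Behind_Master should be 0 (or close)
mysql -u root -p -e "SHOW SLAVE STATUS\G" | \
  grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'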

If you have additional slaves to setup, you can use the same exact dbdump.db and all the SQL statements that followed the mysqldump command, but if you add them a month or so down the road, there are two ways of doing it:

  1. Grab a fresh database dump using the mysqldump command, and repeat all of the steps that followed the mysqldump command above.
  2. Stop the MySQL service on an existing slave and the new slave. Then copy the /var/lib/mysql/ folder to the new slave and make sure it is owned by the MySQL user/group: chown -R mysql:mysql /var/lib/mysql/. Lastly, start both slaves up again, and they’ll catch up to the master pretty quickly.

Conclusion

In a distributed environment, securing your MySQL communications is an important step in preventing unauthorized access to your services. While it can be a bit daunting to put all the pieces together, it is well worth the effort to make sure no one can intercept traffic from your MySQL server(s). The EWWW I.O. API supports encryption at every layer of the stack, to make sure our customer information stays private and secure. Doing so in your environment creates trust with your user base, and customer trust is a precious commodity.

Security

As a precaution, do check your website often with https://www.shodan.io to see if it has exposed software or is known to hackers.
Shane Bishop

Contact: https://ewww.io/contact-us/

Twitter: https://twitter.com/nosilver4u

Facebook: https://www.facebook.com/ewwwio/

Check out the fearby.com guide on bulk optimizing images automatically in WordPress.  Official Site here: https://ewww.io/


V1.1 added shodan.io info

Filed Under: Cloud, Development, Linux, MySQL, Scalable, Security, ssl, VM Tagged With: certificate, cloud, debian, distributed, encrypted, fast, mysql, ssl

Connecting to an AWS EC2 Ubuntu instance with Cloud 9 IDE as user ubuntu and root

September 1, 2016 by Simon Fearby

Recently I setup an Amazon EC2 Ubuntu Server instance and wanted to connect it to the awesome Cloud 9 IDE. I was sick of interacting with a server through terminal windows.

Use this link and get $19 free credit with Cloud 9: https://c9.io/c/DLtakOtNcba

Cloud 9 IDE (sample screenshot)

Previously I was using Digital Ocean (my Digital Ocean setup guide is here) and this was simple: you get a VM, you have a root account, and you do what you want. Amazon AWS, however, has extra layers of security that prevent logging in as root via SSH, and that can be a pain with Cloud 9 as your workspace tree is restricted to the ~/ (home) folder.

Below are the steps you need to connect to an AWS instance with user “ubuntu” and “root” with Cloud 9.

Connecting to an AWS instance with Cloud 9 as user “ubuntu”

1. Purchase and set-up your AWS instance (my guide here).

2. You need to be able to log in to your AWS server from a terminal prompt (e.g. from OSX). This may include opening port 22 in the AWS Security Group panel. Info on SSH logins here.

ssh -i ~/.ssh/yourawskeypair.pem ubuntu@your-ec2-hostname.compute.amazonaws.com

3. On your AWS server (from step 2) Install NodeJS.

You will know node is installed if you get a version returned when typing the following bash command.

node -v

tip: If node is not installed you can run the Cloud 9 pre-requisites script (that includes node).

curl -L https://raw.githubusercontent.com/c9/install/master/install.sh | bash

4. Ensure you have created an SSH key on Cloud 9 (guide here).

5. Copy your Cloud 9 SSH key to the clipboard.

6. On your AWS server (from step 2), edit the ~/.ssh/authorized_keys file and paste the Cloud 9 SSH key on a new line (after the AWS key pair entry that was added during the AWS setup), then save the file.
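
If you prefer doing this from the shell rather than a text editor, appending the key looks like this (the ssh-rsa string is a placeholder for your actual Cloud 9 public key):

# Append the Cloud 9 public key on its own line
echo "ssh-rsa AAAAB3...your-cloud9-key... c9" >> ~/.ssh/authorized_keys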

7. Log in to Cloud 9 and click Create Workspace, then Remote SSH Workspace.

  • Name your workspace (all lowercase and no spaces).
  • Username: ubuntu
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be ~/


8. Click Create Workspace


9. If all goes well you will have a prompt to install the prerequisites.


If this fails check out the Cloud 9 guide here.

Troubleshooting: I had errors like “Project directory does not exist or is not writable” and “Unable to change File System Path in SSH Workspace” because I was trying to set the workspace path as “/” (this is not possible on AWS with the “ubuntu” account).

10. Now you should have a web-based IDE that allows you to browse your server, create and edit files, and run terminal instances that will reconnect if your net connection or browser tab drops out (you can even go to a different machine and continue with your session).


Connecting to an AWS instance with Cloud 9 as user “root”

Connecting to your server as the “ubuntu” user is fine if you just need to work in your “ubuntu” home folder. As soon as you want to start changing other settings outside of your home folder, you are stuck. Granting “ubuntu” higher privileges server-wide is a bad idea, so here is how you can enable “root” login via SSH.

WARNING: Logging in as ROOT IS BAD. You should only allow root login for short periods, and it is advisable to remove root login abilities as soon as you do not need them, and certainly in production.

Having root access while developing or building a new server saves me bucketloads of time, so let’s allow it.

1. Follow step 1 to 5 in the steps above (setup AWS, ssh access via terminal, install node, create cloud 9 ssh key, copy the cloud 9 ssh key to the clipboard).

2. SSH to your AWS server and edit the following file:

sudo nano /etc/ssh/sshd_config
# -- Make the following change
# PermitRootLogin without-password
PermitRootLogin yes

Save.
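
sshd only reads its config at startup, so restart the service for the change to take effect (the service name on Ubuntu is ssh):

# Reload sshd with the new PermitRootLogin setting
sudo service ssh restart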

3. Backup your root authorised keys file

sudo cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys.bak

4. Edit the root authorized_keys file and paste in your Cloud 9 SSH Key.


5. Now you can create a Cloud 9 Connection to your server with root

  • Name your workspace (all lowercase and no spaces).
  • Username: root
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be /


tip: If you have not added your SSH key correctly you will receive an error when connecting.


6. You should now be able to connect to AWS ec2 instances with Cloud 9 as root and configure/do anything you want without switching to shell windows.


Security

As a precaution, do check your website often with https://www.shodan.io to see if it has exposed software or is known to hackers.
Enjoy

If this guide has helped please consider donating a few dollars.


V1.6 security

Filed Under: Cloud, Domain, Hosting, Linux, NodeJS, Security, ssl Tagged With: AWS, c9, cloud, ssh, terminal

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

August 16, 2016 by Simon Fearby

This blog post lists the actions I went through to set up an AWS EC2 Ubuntu server and add the usual applications. It follows on from my Digital Ocean server setup guide and Vultr setup guide.

Variable Pricing

Amazon Web Services gives developers 12 months of free access to a range of products. Amazon don’t have flat-rate fees like Digital Ocean; instead, AWS grants you a minimum level of CPU credits for each server type. A “t2.micro” (1 CPU/1GB RAM, $0.013/hour server) gets 6 CPU credits an hour. That is enough for the CPU to run at 10% all the time, and you can bank up to 144 CPU credits to spend when the application spikes. The “t2.micro” is the free-tier server (costs nothing for 12 months), but when the trial runs out that’s $9.50 a month. The next server up is a “t2.small” (1 CPU, 2GB RAM); you get 12 CPU credits an hour and can bank 288, which is enough for 20% CPU usage all the time.

The “t2.medium” server (2 CPUs, 4GB RAM) allows 40% CPU usage: 24 CPU credits an hour with 576 bankable. That server costs $0.052 an hour, which is about $38 a month. Don’t forget to cache content.

I used about 40 CPU credits generating a 4096-bit secure-prime Diffie-Hellman key for an SSL certificate on my t2.micro server. More information on AWS instance pricing here and here.
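
For reference, the command that chewed through those credits was presumably something like the standard OpenSSL DH parameter generation (the output path matches the commented dhparams line in the NGINX config later in this post):

# Generate 4096-bit Diffie-Hellman parameters (very CPU hungry)
sudo openssl dhparam -out /etc/nginx/ssl/dhparams.pem 4096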

Creating an AWS Account (Free Trial)

The signup process is simple.

  1. Create a free AWS account.
  2. Enter your CC details (for any non-free services) and submit your account.

It will take 2 days or more for Amazon to manually approve your account. When you have been approved, navigate to https://console.aws.amazon.com, log in, and set your region in the top right next to your name (in my case I will go with Australia, ‘ap-southeast-2‘).

My console home is now: https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-2#LaunchInstanceWizard

Create a Server

You can follow the prompts to set up a free tier EC2 Ubuntu server here.

1. Choose Ubuntu EC2

2. Choose Instance Type: t2-micro (1x CPU, 1GB Ram)

3. Configure Instance: 1

4. Add Storage: /dev/sda1, 8GB+, 10-3000 IOPS

5. Tag Instance: Your own role specific tags

6. Configure Security Group: Default firewall rules.

7. Review

Tip: Create a 25GB volume (instead of 8GB), or you will need to add an extra volume and mount it with the following commands.

sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /mydata
sudo nano /etc/fstab
(append the following line)
/dev/xvdf    /mydata    ext4    defaults,nofail    0    2
sudo mount -a
cd /mydata
ls -al

Part of the EC2 server setup was to save a .pem file to the SSH folder on your local PC (~/.ssh/myserverkeypair.pem).

You will need to secure the file:

chmod 400 ~/.ssh/myserverkeypair.pem

Before we connect to the server we need to configure the firewall here in the Amazon Console.

Type Protocol Port Range Source Comment
HTTP TCP 80 0.0.0.0/0 Opens a web server port for later
All ICMP ALL N/A 0.0.0.0/0 Allows you to ping
All traffic ALL All 0.0.0.0/0 Not advisable long term but OK for testing today.
SSH TCP 22 0.0.0.0/0 Not advisable, try and limit this to known IP’s only.
HTTPS TCP 443 0.0.0.0/0 Opens a secure web server port for later

DNS

You will need to assign a static IP to your server (the public IP is not static by default). Here is a good read on connecting a domain name to the IP and assigning an Elastic IP to your server. Once you have assigned an Elastic IP you can point your domain to your instance.
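
If you prefer the AWS CLI (set up in the next section) to the console, allocating and attaching an Elastic IP looks roughly like this (the allocation ID and instance ID are placeholders):

# Allocate a new Elastic IP in your VPC, then attach it to your instance
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-123456789 --allocation-id eipalloc-0abc1234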


Installing the Amazon Command Line Interface utils on your local PC

This is required to see your server’s console output and then connect via SSH.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
/usr/local/bin/aws --version

You now need to configure your AWS CLI, first generate Access Keys here. While you are there setup Multi-Factor Authentication with Google Authenticator.

aws configure
> AWS Access Key ID: {Your AWS Access Key ID}
> AWS Secret Access Key: {Your AWS Secret Access Key}
> Default region name: ap-southeast-2
> Default output format: text

Once you have configured your CLI you can connect and review your Ubuntu console output (the instance ID can be found in your EC2 server list).

aws ec2 get-console-output --instance-id i-123456789

Now you can hopefully connect to your server and accept any certificates to finish the connection.

ssh -i ~/.ssh/myserverpair.pem ubuntu@ec2-xx-xx-xx-xx.ap-southeast-2.compute.amazonaws.com


Success, I can now access my AWS Instance.

Setting the Time and Daylight Savings.

Check your time.

sudo hwclock --show
> Fri 21 Oct 2016 11:58:44 PM AEDT  -0.814403 seconds

My daylight savings offset has not kicked in.

Install ntp service

sudo apt install ntp

Set your Timezone

sudo dpkg-reconfigure tzdata

Go to http://www.pool.ntp.org/zone/au to find your NTP servers (or go here if you are outside Australia).

server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org

Add the NTP servers to “/etc/ntp.conf” and restart the NTP service.

sudo service ntp restart

Now check your time again and you should have the right time.

sudo hwclock --show
> Fri 21 Oct 2016 11:07:38 PM AEDT  -0.966273 seconds

🙂

Installing NGINX

I am going to install the latest v1.11.1 mainline development (non-legacy) version. Beware of bugs and breaking changes here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

NGINX is now installed. Try and get to your domain via port 80 (if it fails to load, check your firewall).

Installing NodeJS

Here is how you can install the latest NodeJS (development build); beware of bugs and frequent changes. Read the API docs here.

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

NodeJS is installed.

Installing MySQL

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
sudo mysql_install_db
sudo mysql_secure_installation
service mysql status

Install PHP 7.x and PHP7.0-FPM

I am going to install PHP 7 due to the speed improvements over 5.x.  Below were the commands I entered to install PHP (thanks to this guide)

sudo add-apt-repository ppa:ondrej/php
sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm
sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

NGINX Configuration

NGINX can be a bit tricky to setup for newbies and your configuration will certainly be different but here is mine (so far):

File: /etc/nginx/nginx.conf

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /var/run/nginx.pid;
events {
        worker_connections 1024;
        multi_accept on;
}
http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        client_body_buffer_size      128k;
        client_max_body_size         10m;
        client_header_buffer_size    1k;
        large_client_header_buffers  4 4k;
        output_buffers               1 32k;
        postpone_output              1460;

        proxy_headers_hash_max_size 2048;
        proxy_headers_hash_bucket_size 512;

        client_header_timeout  1m;
        client_body_timeout    1m;
        send_timeout           1m;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        # ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        gzip on;
	gzip_disable "msie6";
	gzip_vary on;
	gzip_proxied any;
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_min_length 256;
	gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        access_log /var/log/nginx/myservername.com.log;

        root /usr/share/nginx/www;
        index index.php index.html index.htm;

        server_name www.myservername.com myservername.com localhost;

        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking

        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;


        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;

        # This is handled with the header above.
        #rewrite ^/(.*) https://myservername.com/$1 permanent;

        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }

        fastcgi_param PHP_VALUE "memory_limit = 512M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;

                # include snippets/fastcgi-php.conf;

                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }

        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

Test and Reload NGINX Config

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Don’t forget to test PHP with a script that calls ‘phpinfo()’.
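
A quick way to do that (assuming the /usr/share/nginx/www web root used below; remove the file afterwards so it is not left exposed):

echo '<?php phpinfo();' | sudo tee /usr/share/nginx/www/info.php
# browse to http://yourserver/info.php, then clean up
sudo rm /usr/share/nginx/www/info.php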

Install PhpMyAdmin

Here is how you can install the latest branch of phpmyadmin into NGINX (no apache)

cd /usr/share/nginx/www
mkdir -p my/secure/secure/folder
cd my/secure/secure/folder
sudo apt-get install git
git clone --depth=1 --branch=STABLE https://github.com/phpmyadmin/phpmyadmin.git

If you need to import databases into MySQL you will need to enable file uploads in PHP and set file upload limits. Review this guide to enable uploads in phpMyAdmin. Also, if your database is large you may need to increase the “client_max_body_size” setting in nginx.conf (see guide here). Don’t forget to disable uploads and reduce the size limits in NGINX and PHP once you have finished importing databases.
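
If you are unsure which PHP settings control uploads, these are the two that matter (same php.ini as earlier; 64M is just an example size, match it to your database dump):

sudo nano /etc/php/7.0/fpm/php.ini
> edit: upload_max_filesize = 64M
> edit: post_max_size = 64M
sudo service php7.0-fpm restart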

Note: phpMyAdmin can be a pain to install so don’t be afraid of using an alternative management GUI.  Here is a good list of MySQL management interfaces. Also check your OS App store for native MySQL database management clients.

Install an FTP Server

Follow this guide here then..

sudo nano /etc/vsftpd.conf
write_enable=YES
sudo service vsftpd restart

Install: oracle-java8 (using this guide)

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
cd /usr/lib/jvm/java-8-oracle
ls -al
sudo nano /etc/environment
> append: JAVA_HOME="/usr/lib/jvm/java-8-oracle"
source /etc/environment
echo $JAVA_HOME
sudo update-alternatives --config java
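
To confirm the Oracle JDK is the active runtime:

java -version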

Install: ncdu – Interactive tree based folder usage utility

sudo apt-get install ncdu
sudo ncdu /

Install: pydf – better quick disk check tool

sudo apt-get install pydf
sudo pydf

Install: rcconf – display startup processes (handy for confirming pm2 is running).

sudo apt-get install rcconf
sudo rcconf

I enabled “php7.0-fpm” here as it was not starting on boot.

I had an issue where PM2 was not starting at server reboot and not reporting to https://app.keymetrics.io.  I ended up repairing /etc/init.d/pm2-init.sh as mentioned here.

sudo nano /etc/init.d/pm2-init.sh
# edit start function to look like this
...
start() {
    echo "Starting $NAME"
    export PM2_HOME="/home/ubuntu/.pm2" # <== add this line
    super $PM2 resurrect
}
...

Install IpTraf – Network Packet Monitor

sudo apt-get install iptraf
sudo iptraf

Install JQ – JSON Command Line Utility

sudo apt-get install jq
# Download and display a JSON file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .
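
jq is most useful when you filter rather than just pretty-print; for example, to pull only the first commit message out of the same feed (field names as returned by the GitHub commits API):

curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq '.[0].commit.message'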

Install Ruby – the commands below are a bit out of order because some did not work the first time for unknown reasons.

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
sudo gem install bundler
sudo git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
sudo git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
sudo git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
sudo echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
sudo echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
sudo rbenv install 2.2.3
sudo rbenv global 2.2.3
sudo rbenv rehash
ruby -v
sudo apt-get install ruby-dev

Install Twitter CLI – https://github.com/sferik/t

sudo gem install t
sudo t authorize # follow the prompts

Mutt (send mail by command line utility)

Help site: https://wiki.ubuntu.com/Mutt

sudo nano /etc/hosts
127.0.0.1 localhost localhost.localdomain xxx.xxx.xxx.xxx yourdomain.com yourdomain

Configuration:

sudo apt-get install mutt
[configuration none]
sudo nano /etc/postfix/main.cf
[setup a]

Configure postfix guide here

Extend the history command’s history

I love the history command and here is how you can expand its history and ignore duplicates.

HISTSIZE=10000
HISTCONTROL=ignoredups
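
Those settings only last for the current session; to make them permanent, append them to your ~/.bashrc:

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc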

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

Cont…

Next: I will add an SSL cert, lock down the server and setup Node Proxies.

If this guide was helpful please consider donating a few dollars to keep me caffeinated.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.61 added vultr guide link

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Security Tagged With: AWS, server

Application scalability on a budget (my journey)

August 12, 2016 by Simon Fearby

If you have read my other guides on https://www.fearby.com you may be able to tell that I like the self-managed Ubuntu servers you can buy from Digital Ocean for as low as $5 a month (click here to get $10 in free credit and start your server in 5 minutes). Vultr has servers as low as $2.5 a month. Digital Ocean is a great place to start up your own server in the cloud, install some software and deploy some web apps or the backend (API/databases/content) for mobile apps or services.  If you need more memory, processor cores or hard drive storage, simply shut down your Digital Ocean server, click a few options to increase your server resources and you are good to go (this is called “scaling up“). Don’t forget to cache content to limit usage.

This scalability guide is a work in progress (watch this space). My aim is to serve 2000 concurrent geo-query users a second (think Pokémon Go) for under $80 a month (1x server and 1x MongoDB cluster).  Currently serving 600~1200/sec.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Estimating Costs

If you don’t estimate costs you are planning to fail.

"By failing to prepare you are preparing to fail." - Benjamin Frankin

Estimate the minimum users you need to remain viable and then the expected maximum users you need to handle. What will this cost?

Planning for success

Anyone who has researched application scalability has come across articles on apps that have crashed under load at launch.  Even governments can spend tens of millions on developing a scalable solution, plan for years and fail dismally on launch (check out the Australian Census disaster).  The Australian government contracted IBM to develop a solution to receive up to 15 million census submissions between the 28th of July and the 5th of September. IBM designed a system and a third party performance-tested it to 400 submissions a second, but the maximum received on census night before the system crashed was only 154 submissions a second. Predicting application usage can be hard; in the case of the Australian census, the bulk of people logged on to submit census data on the recommended night of the 9th of August 2016.

Sticking to a budget

This guide is not for people with deep pockets wanting to deploy a service to 15 million people but for solo app developers or small non-funded startups on a serious budget.  If you want a very reliable scalable solution or service provider you may want to skip this article and check out services by the following vendors.

  • Firebase
  • Azure (good guides by Troy Hunt: here, here and here).
  • Amazon Web Services
  • Google Cloud
  • NGINX Plus

The above vendors have what seems like an infinite array of products and services that can form part of your solution, but beware: the more products you use, the more complex it will be and the higher the costs.  A popular app can be an expensive app. That’s why I like Digital Ocean, as you don’t need a degree to predict and plan your server’s average usage and buy extra resource credits if you go over predicted limits. With Digital Ocean you buy a virtual server and you get known memory, storage and data transfer limits.

Let’s go over topics that you will need to consider when designing or building a scalable app on a budget.

Application Design

Your application needs will ultimately decide the technology and servers you require.

  • A simple business app that shares events, products and contacts would require a basic server and MySQL database.
  • A turn-based multiplayer app for a few hundred people would require more server resources and endpoints (a NGINX, NODEJS and an optimized MySQL database would be ok).
  • A larger augmented reality app for thousands of people would require a mix of databases and servers to separate the workload (a NGINX webserver and NodeJS powered API talking to a MySQL database to handle logins and a single server NoSQL database for the bulk of the shared data).
  • An augmented reality app with tens of thousands of users (a NGINX web server, NodeJS powered API talking to a MySQL database to handle logins and NoSQL cluster for the bulk of the shared data).
  • A business critical multi-user application with real-time chat – are you sure you are on a budget? This will require a full solution from Azure, Firebase or Amazon Web Services.

A native app, hybrid app or full web app can drastically change how your application works ( learn the difference here ).

Location, location, location.

You want your server and resources to be as close to your customers as possible; this is one rule that cannot be broken. If you need to spend more money to buy a server in a location closer to your customers, do it.

My Setup

I have a Digital Ocean server with 2 cores and 2GB of ram in Singapore that I use to test and develop apps. That one server has MySQL, NGINX, NodeJS, PHP and many scripts running on it in the background.  I also have a MongoDB cluster (3 servers) running on AWS in Sydney via MongoDB.com.  I looked into CouchDB via Cloudant but needed the Geo JSON features with fair dedicated pricing. I am considering moving off the Digital Ocean Ubuntu server (in Singapore) and onto an AWS server (in Sydney). I am using promise-based NodeJS calls where possible to keep calls to the operating system, database and web non-blocking.  Update: I moved my domain to Vultr (article here)

Here is a benchmark for an HTTP and HTTPS request from rural NSW to Sydney, then Melbourne, then Adelaide, then Perth, then Singapore, to a Node server on an NGINX server that does a callback to Sydney to run a GeoQuery against a large database and returns the result to the customer via Singapore.

SSL

SSL adds processing overhead and latency.
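
You can see roughly what the TLS handshake costs by asking curl to break down a request’s timings (a quick check; swap in your own URL):

curl -o /dev/null -s -w 'connect: %{time_connect}s  TLS: %{time_appconnect}s  total: %{time_total}s\n' https://yourserver.com/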

Here is a breakdown of the hops from my desktop in Regional NSW making a network call to my Digital Ocean Server in Singapore (with private parts redacted or masked).

traceroute to destination-server-redacted.com (###.###.###.##), 64 hops max, 52 byte packets
 1  192-168-1-1 (192.168.1.1)  11.034 ms  6.180 ms  2.169 ms
 2  xx.xx.xx.xxx.isp.com.au (xx.xx.xx.xxx)  32.396 ms  37.118 ms  33.749 ms
 3  xxx-xxx-xxx-xxx (xxx.xxx.xxx.xxx)  40.676 ms  63.648 ms  28.446 ms
 4  syd-gls-har-wgw1-be-100 (203.221.3.7)  38.736 ms  38.549 ms  29.584 ms
 5  203-219-107-198.static.tpgi.com.au (203.219.107.198)  27.980 ms  38.568 ms  43.879 ms
 6  tengige0-3-0-19.chw-edge901.sydney.telstra.net (139.130.209.229)  30.304 ms  35.090 ms  43.836 ms
 7  bundle-ether13.chw-core10.sydney.telstra.net (203.50.11.98)  29.477 ms  28.705 ms  40.764 ms
 8  bundle-ether8.exi-core10.melbourne.telstra.net (203.50.11.125)  41.885 ms  50.211 ms  45.917 ms
 9  bundle-ether5.way-core4.adelaide.telstra.net (203.50.11.92)  66.795 ms  59.570 ms  59.084 ms
10  bundle-ether5.pie-core1.perth.telstra.net (203.50.11.17)  90.671 ms  91.315 ms  89.123 ms
11  203.50.9.2 (203.50.9.2) 80.295 ms  82.578 ms  85.224 ms
12  i-0-0-1-0.skdi-core01.bx.telstraglobal.net (Singapore) (202.84.143.2)  132.445 ms  129.205 ms  147.320 ms
13  i-0-1-0-0.istt04.bi.telstraglobal.net (202.84.243.2)  156.488 ms
    202.84.244.42 (202.84.244.42)  161.982 ms
    i-0-0-0-4.istt04.bi.telstraglobal.net (202.84.243.110)  160.952 ms
14  unknown.telstraglobal.net (202.127.73.138)  155.392 ms  152.938 ms  197.915 ms
15  * * *
16  destination-server-redacted.com (xx.xx.xx.xxx)  177.883 ms  158.938 ms  153.433 ms

160ms to send a request to the server.  This is on a good day when the Netflix Effect is not killing links across Australia.

Here is the route for a call from the server above (Digital Ocean, Singapore) to the MongoDB cluster on Amazon Web Services in Sydney.

traceroute to redactedname-shard-00-00-nvjmn.mongodb.net (##.##.##.##), 30 hops max, 60 byte packets
 1  ###.###.###.### (###.###.###.###)  0.475 ms ###.###.###.### (###.###.###.###)  0.494 ms ###.###.###.### (###.###.###.###)  0.405 ms
 2  138.197.250.212 (138.197.250.212)  0.367 ms 138.197.250.216 (138.197.250.216)  0.392 ms  0.377 ms
 3  unknown.telstraglobal.net (202.127.73.137)  1.460 ms 138.197.250.201 (138.197.250.201)  0.283 ms unknown.telstraglobal.net (202.127.73.137)  1.456 ms
 4  i-0-2-0-10.istt-core02.bi.telstraglobal.net (202.84.225.222)  1.338 ms i-0-4-0-0.istt-core02.bi.telstraglobal.net (202.84.225.233)  3.817 ms unknown.telstraglobal.net (202.127.73.137)  1.443 ms
 5  i-0-2-0-9.istt-core02.bi.telstraglobal.net (202.84.225.218)  1.270 ms i-0-1-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.157)  50.869 ms i-0-0-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.153)  49.789 ms
 6  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  107.395 ms  108.350 ms  105.924 ms
 7  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  105.911 ms 21459.tauc01.cu.telstraglobal.net (134.159.124.85)  108.258 ms  107.337 ms
 8  21459.tauc01.cu.telstraglobal.net (134.159.124.85)  107.330 ms unknown.telstraglobal.net (134.159.124.86)  101.459 ms  102.337 ms
 9  * unknown.telstraglobal.net (134.159.124.86)  102.324 ms  102.314 ms
10  * * *
11  54.240.192.107 (54.240.192.107)  103.016 ms  103.892 ms  105.157 ms
12  * * 54.240.192.107 (54.240.192.107)  103.843 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

It appears Telstra Global or AWS block the tracing of the network path closer to the destination, so I will ping instead to see how long the trip takes.

bytes from ec2-##-##-##-##.ap-southeast-2.compute.amazonaws.com (##.##.##.##): icmp_seq=1 ttl=50 time=103 ms

It is obvious that the longest part of the response to the client is not the GeoQuery on the MongoDB cluster or the processing in NodeJS but the packet’s travel time and the cost of securing it.

My server locations are not optimal. I cannot move the AWS MongoDB cluster to Singapore because MongoDB doesn’t have servers in Singapore, and Digital Ocean don’t have servers in Sydney.  I should move my services on Digital Ocean to Sydney, but for now let’s see how far this Digital Ocean server in Singapore and MongoDB cluster in Sydney can go. I wish I had known about Vultr as they are like Digital Ocean but have a location in Sydney.

Security

Secure (SSL) communication is now mandatory for Apple and Android apps talking over the internet, so we can’t eliminate that to speed up the connection, but we can move the server. I am using more modern SSL ciphers in my SSL certificate and they may slow down the process too. Here is a speed test of my server’s ciphers; if you use stronger security, expect a small CPU hit.

cipherspeed
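
You can benchmark raw cipher throughput on your own box with OpenSSL’s built-in speed test (AES-256-GCM shown here as an example):

openssl speed -evp aes-256-gcm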

fyi: I have a few guides on adding a commercial SSL certificate to a Digital Ocean VM and updating OpenSSL on a Digital Ocean VM, a guide on configuring NGINX and SSL, and one on limiting ssh connection rates to prevent brute force attacks.

Server Limitations and Benchmarking

If you are running your website on a shared server (e.g. a CPanel domain) you may encounter resource limit warnings, as web hosts and some providers want to charge you more for moderate to heavy use.

Resource Limit Is Reached 508
The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I have never received a resource limit reached warning with Digital Ocean.

Most hosts (AWS/Digital Ocean/Azure etc) have limitations on your server, and when you exceed a magical limit they restrict your server or start charging excess fees (they are not running a charity).  AWS and Azure have different terminology for CPU credits and you really need to predict your application’s CPU usage to factor in the scalability and monthly costs. Servers and databases generally have limited IOPS (input/output operations per second) and lower tier plans offer lower IOPS.  MongoDB Atlas lower tiers have < 120 IOPS, middle tiers have 240~2400 IOPS and higher tiers have 3,000~20,000 IOPS.
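
To see what IOPS your own disks are actually doing, the iostat tool from the sysstat package is handy (extended device stats refreshed every 5 seconds):

sudo apt-get install sysstat
iostat -dx 5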

Know your bottlenecks

The siege HTTP stress testing tool is good; the command below will throw 400 concurrent local HTTP connections at your website for one minute.

#!/bin/bash
siege -t1m -c400 'http://your.server.com/page'

The results seem a bit low: 47.3 trans/sec.  No failed transactions though 🙂

** SIEGE 3.0.5
** Preparing 400 concurrent users for battle.
The server is now under siege...
Lifting the server siege.. done.

Transactions: 2803 hits
Availability: 100.00 %
Elapsed time: 59.26 secs
Data transferred: 79.71 MB
Response time: 7.87 secs
Transaction rate: 47.30 trans/sec
Throughput: 1.35 MB/sec
Concurrency: 372.02
Successful transactions: 2803
Failed transactions: 0
Longest transaction: 8.56
Shortest transaction: 2.37

Sites like http://loader.io/ allow you to hit your web server or web page with many hits a second from outside of your server.  Below I threw 50 concurrent users at a node API endpoint that was hitting a geo query on my MongoDB cluster.

nodebench50c

The server can easily handle 50 concurrent users a second. Latency is an issue though.

nodebench50b

I can see the two secondary MongoDB servers being queried 🙂

nodebench50a

Node has decided to only use one CPU under this light load.

I tried 100 concurrent users over 30 seconds. CPU activity was about 40% of one core.

nodebench50d

I tried again with a 100-200 concurrent user limit (passed). CPU activity was about 50% using two cores.

nodebench50e

I tried again with a 200-400 concurrent user limit over 1 minute (passed). CPU activity was about 80% using two cores.

nodebench50f

nodebench50g

It is nice to know my promise-based NodeJS code can handle 400 concurrent users requesting a large GeoJSON dataset without timeouts. The result is about the same as siege (47.6 trans/sec). The issue now is the delay in the data getting back to the user.

I checked the MongoDB cluster and I was only reaching 0.17 IOPS (maximum 100) and 16% CPU usage so the database cluster is not the bottleneck here.

nodebench50h

Out of curiosity, I ran a 400 connection benchmark to the node server over HTTP instead of HTTPS and the results were near identical (400 concurrent connections with 8000ms delay).

I really need to move my servers closer together to avoid the delays in responding. 47.6 geo queries served a second (4,112,640 a day) with a large payload is ok, but it is not good enough for my application yet.

Limiting Access

I may limit access to my API based on geo lookups (http://ipinfo.io is a good site that allows you to programmatically limit access to your app services) and auth tokens, but this will slow down uncached requests.
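
As a rough sketch of the kind of lookup involved (this uses ipinfo.io’s public JSON endpoint; 8.8.8.8 is just a sample address, and a paid token is needed for real volumes):

curl -s https://ipinfo.io/8.8.8.8/json | jq .country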

Scale Up

I can always add more cores or memory to my server in minutes but that requires a shutdown. 400 concurrent users do max out my CPU and push memory above 80%, so adding more cores and memory would be beneficial.

Digital Ocean does allow me to permanently or temporarily raise the resources of the virtual machine. To obtain 2 more cores (4 total) and 4x the memory (8GB) I would need to jump to the $80/month plan and adjust the NGINX and Node configuration to use the extra cores/ram.

nodebench50i

If my app is profitable I can certainly reinvest.

Scale Out

With MongoDB clusters I can easily shard the cluster and gain extra throughput if I need it, but with 0.17% of my existing cluster being utilised I should focus on moving the servers closer together.

NGINX does have commercial-level products that handle scalability but they cost thousands. I could scale out manually by setting up Node proxies that point to multiple servers receiving the parent calls. This may be beneficial as Digital Ocean servers start at $5 a month, but it would add a whole lot of complexity.

Cache Solutions

  • Nginx Caching
  • OpCache if you are using PHP.
  • Node-cache – In memory caching.
  • Redis – In memory caching.

Monitoring

Monitoring your server and resources is essential for detecting memory leaks and spikes in activity. HTOP is a great command line monitoring tool on Linux.
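
If you don’t have it already:

sudo apt-get install htop
htop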

http://pm2.keymetrics.io/ is a good node package monitoring app but it does go a bit crazy with processes on your box.

CPU Busy

Communication

It is a good idea to inform users of server status and issues with delayed queries, and when things go down, inform people early. Update: article here on self-service status pages.

censisfail

The Future

UPDATE: 17th August 2016

I set up an Amazon Web Services EC2 server (read the AWS setup guide here) with only 1 CPU and 1GB ram and have easily achieved 700 concurrent connections.  That’s 41,869 geo queries served a minute.

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

AWS MongoDB Test

The MongoDB cluster CPU was at 25% usage with 200 query opcounters on each secondary server.

I think I will optimize the AWS OS ‘swappiness’ and performance stats and aim for 2000 queries.
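
Lowering swappiness tells the kernel to avoid swapping application memory to disk; a common tweak (takes effect immediately; add the same line to /etc/sysctl.conf to persist it across reboots):

sudo sysctl vm.swappiness=10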

This is how many hits I can get with the CPU remaining under 95% (794 geo serves a second). AMAZING.

AWS MongoDB Test

Another recent benchmark:

AWS Benchmark

UPDATE: 3rd Jan 2017

I decided to ditch the cluster of three AWS servers running MongoDB and instead set up a single MongoDB instance on an Amazon t2.medium server (2 CPU/4GB ram) for about $50 a month. I can always upgrade to the AWS MongoDB cluster later if I need it.

Ok, I just threw 2000 concurrent users at the new single-server AWS MongoDB instance and it handled the delivery with no dropped connections. The average response time was 4,027 ms, which is not ideal, but this is 2000 users a second, and that is after the API handles the IP banned list check, user account validity, the last-5-minute query limit check (from MySQL), payload validation on every field and then the MongoDB geo query.

scalability_budget_2017_001

The two cores on the server were hitting about 95% usage. The benchmark here is the same dataset as above, but the API is also doing a whole series of payload validation, user limiting and logging steps.

Benchmarking with 1000 maintained users a second, the average response time is a much lower 1,022 ms. Honestly, if I had 1000-2000 user queries a second I would upgrade the server or add a series of lower spec AWS t2.micro servers and create my own cluster.

Security

Cheap may not be good (hosting or DIY); do check your website often on https://www.shodan.io and see if it has exposed software or is known to hackers.
If this guide has helped please consider donating a few dollars.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.7 added self-service status page info and Vultr info

Short: https://fearby.com/go2/scalability/

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Scalability, Scalable, Security, ssl, VM Tagged With: api, digital ocean, mongodb, scalability, server

Quick guide to using Adobe Premiere Pro CC to make videos.

June 30, 2016 by Simon Fearby

This is a simple guide for using Adobe Premiere Pro to edit simple videos.

Adobe used to sell a Master Collection of all Adobe software that cost thousands of dollars. Adobe has moved to a monthly subscription model for single or bundled software packages. Go to http://www.adobe.com/au/creativecloud.html and review the software options.

Adobe Photoshop is a must for editing Photographs and Adobe Premiere Pro is a must for video editing.

Adobe CC

You will need to choose your Adobe software package or bundle (above) and follow the prompts to purchase it. You will be required to link the software to an Adobe ID (Adobe account). Once you link your software to an Adobe ID you can log in to any PC/Mac and install your software using the Adobe App Download Utility.

Fyi: If you can buy a subsidised 1-year subscription of Creative Cloud at your work you will need to:

  • 1: Buy the 1 year redemption from your work.
  • 2: Create an Adobe account.
  • 3: Redeem the subscription.
  • 4: Download the apps with the Adobe Download Manager app.

The Adobe Download Manager will allow you to download or update any of your purchased software packages.

Update installed apps

cc_1

Download new apps.

Adobe Creative Cloud

Once you have installed Adobe Premiere Pro you can open it.

premiere_pro_01

Before you start.

  • Have you recorded all of your footage?
  • Do you have at least 20GB free space?
  • Save your videos to a folder that will not be moved later, as Premiere Pro links to the videos where they reside and moving them will force you to re-link them.
  • Do you have a good story to tell?

Bookmark the Adobe Premiere Pro CC help pages here https://helpx.adobe.com/premiere-pro.html and visit YouTube and search for “Adobe Premiere Pro CC Tutorial”.

Adobe Premiere Pro User Interface Basics.

I created a new project called “snow” and will save the project in the default place. Select “HDV” for High Definition. Click on the scratch disc tab.

premiere_pro_02

Adobe Premiere Pro will use these locations to copy any of the videos or audio instead of touching the originals. You can delete cached content later (these folders will grow a lot).

premiere_pro_03

The first thing you need to learn is how to reset your workspace. Go to the Window menu, then Workspaces, then Editing to reset your view.

premiere_pro_04

Click the “Project: Snow” tab in the lower left-hand side to get ready to start importing videos.

You could just start dragging and dropping in dozens of videos but I would suggest you create a number of folders (called “bins”) to store your video and other media assets into.

premiere_pro_05

I will create 4 folders (bins) called:

  • 01. On our way to the snow
  • 02. Above Sheba Dam
  • 03. Near Ponderosa
  • 04. Misc

Now I can drag the videos (and pictures) into each bin folder.

premiere_pro_06

Once you have filled each folder take some time reviewing each folder and think about what story you can create. I am going to create a fun mash-up for the day in order.

Creating My Snow Video

First, I double click on my “01. On our way to the snow” bin folder and drag the video onto the timeline sequence.

premiere_pro_07

Hopefully, you can now see your video in the timeline.

premiere_pro_08

Shortcut Keys:

  • SPACEBAR plays and pauses the clip.
  • MINUS will Zoom Out of the timeline.
  • PLUS will Zoom Into the timeline.
  • BACKSPACE will zoom out and show all of your timeline.

More shortcuts keys here.

If you were skilled at recording you could just add more video to the timeline and be finished in no time, but we have loads of editing and rearranging to do.

Adding a Title

Move your playback cursor to the start of the video (press HOME) and go to the Title menu, then New Title, then Default Still to start creating a new title.

premiere_pro_09

If you have ever used Photoshop this will be very familiar. Close the title screen to save it.

premiere_pro_10

Now I dragged and dropped the title onto the timeline above the existing video track. I moved the driving time-lapse to the right and dragged a photo in before the time-lapse. Now you can see how you can mix together videos, pictures and text.

premiere_pro_11

Before I get too carried away I am going to source a background soundtrack from https://www.youtube.com/user/NoCopyrightSounds/videos. I am going to choose this one: https://www.youtube.com/watch?v=UkUweq5FAcE

If I have enough footage I like to have my videos match the audio where possible for greater impact.

Basic Editing Tools

These are the tools that you will use 99% of the time.

premiere_pro_12

If you want to cut a video in half use the Razor tool, if you want to move or interact with clips switch back to the selection tool.

Cutting, Trimming and Cropping Videos and audio

Move the selection tool to the end of a video and click and drag when you see this icon to truncate the length of a video without cutting it.

premiere_pro_13

Move the selection tool to the start of a video and click and drag when you see this icon to truncate the length of a video without cutting it.

premiere_pro_14

You could cut a video to size, but truncating it (like above) will allow you to extend the clip again without rejoining the video sections.

Layers

Adobe Premiere Pro (like Photoshop) allows you to have layers of videos so you can show a “picture in picture” (or a picture in picture in picture).

premiere_pro_15

Building your Video.

Now comes the time-consuming bit: finding each video to insert, placing it, trimming it, cutting it and/or moving it. This can take hours or days (depending on how fussy you are).

Caching your files.

Adobe Premiere Pro likes to cache your files to speed up the preview and export process. The green, yellow and red lines indicate what parts of your timeline are cached.

premiere_pro_16

Exporting your Video

premiere_pro_17

Exports can take up to 1 hour per 5 minutes of footage on a 3-year-old computer.

My VLOG Trip to the Snow video.

My 1st VLOG video.

Tutorials to Watch:

  • Create a Ken Burns Effect in Premiere Pro CC
  • Adobe Premiere Pro CC – Editing 101: Basic Audio (Part 3)
  • Episode 19 – Adding Video and Audio Transitions – Tutorial for Adobe Premiere Pro CC 2015

If you need more help consider buying a professional Adobe Premiere Pro CC course from Udemy.com or search YouTube for Adobe Premiere Pro CC Tutorials

The Adobe Help site is great too: https://helpx.adobe.com/premiere-pro.html

Good luck and let me know what you create.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

Filed Under: Apple, Cloud, Video Editing Tagged With: adobe, editing, premiere, pro, Video
