


How to get useful feedback for your ideas

July 16, 2017 by Simon

I have seen that investors are great at listening to pitches for the next best thing in order to snap up an investment opportunity. IMHO, investors are worried about two things:

  • A) will this make money (with little effort from me)?
  • B) will this increase my success stats?

Feedback from Legal

Depending on your app idea, you may need to get feedback from a legal expert. It is always a good idea not to start building an app until you have validated your idea with potential paying customers, and it is a good idea to seek legal advice too. Obtaining advice only from a startup investor or agent may not be enough; ensure that your legal representative has relevant experience and knows what they are talking about.

Feedback from Investors

Investors are very experienced but won't necessarily give feedback like the judges on Dragons' Den or Shark Tank.

Feedback from Customers

Feedback from customers can also be dangerous as Trades Cloud found out.

Feedback from a Startup Mentor (board members) 

My experience with mentors is that they are like investors: they only give advice on what they know. In my case the feedback was all negative; they did not want to see what I had done, they advised me not to chase small profits (which goes against "small profit, large turnover") and to stop what I was doing and chase large profits instead. I was also advised to think about capital cities rather than regions.

If you are only after money, I would suggest you talk to an investor or startup mentor.

Project Types

Before you ask for or listen to feedback, work out whether you are developing a small product or a product plus a startup, as the feedback you receive will be quite different. A smaller product may only need feedback from customers, but a start-up idea will need more thorough checks and idea validation.

I’d recommend you read this page on How To Start a Startup – Infographic if you want to create a startup.

The infographic was created by Anna Vital (based on an essay by Paul Graham).

Feedback for a smaller idea

As a developer, you may want to get external feedback on your work, ideas or project. All feedback is good, right? This is what I have learned (so far).

The best feedback you can get comes before development starts, by talking to potential customers, and later from end-users during alpha and beta testing.

Positive Feedback

  • Feedback can give you a fresh perspective and make you think.
  • Feedback can save you money.
  • Feedback can be free.

Negative Feedback

  • Don’t be disheartened when you hear negative feedback. All advice is good advice and it is up to you to contextualise and prioritise the feedback into tasks or actions.
  • People will tell you it has already been done (after a quick Google search), but on detailed inspection this may not be the case.
  • Negative feedback can seed doubts.

Do or Don’t

Google is used by many for finding medical advice, and it can also be used to find out whether a product or service already exists. But Google may turn you away from doing something. Googlewhacking is a game where you try to find a two-word query that returns exactly one search result.

I have skipped countless cloud providers and chose one based on what was important to me (value, performance and ease of use). Product ideas are similar; it's all in the execution and iteration.

Remember, there was a platform called GeoCities before MySpace; did that stop Facebook? Social media, products and services will come and go (remember Kodak and Blockbuster).

If in doubt find a good mentor.

Idea Validation

Good feedback should hurt and should make you think. Here is a good list of 232 failed start-up post mortems (I listed my favourite ones here).

I like Backblaze's post on getting your first 1,000 customers.

Validate your ideas with customers and prioritise feedback from paying customers over free feedback. Running polls asking what future features should be is a good idea once you have 100 or more customers.

Pitches

If you are developing an idea, be prepared to pitch it as the project grows. Pitches succeed when you tick the listener's boxes; don't be concerned if a pitch ends in silence.

Ignoring Negative advice

People who give negative feedback may not know all of the details and will likely forget giving the feedback hours later, so don't take negative feedback to heart.

I’d certainly listen to advice when it comes to money though.

People usually give feedback on what they know so don’t be too concerned if feedback is not what you expect.

Investors

Investors hate products that already exist (or partially exist), so do your research and explain how your solution solves existing problems or improves on what's out there.

Do competitor or product checks and validate those problems beforehand.

Research

You have not thought of everything, again here is a good list of 232 failed start-up post mortems (I listed my favourite ones here).

Ensure you are delivering what people want (and the solution does not exist already).

Monetization is not everything

It is a common joke that “Investors make money, idea owners pay tax (and wages)”.

Rob Sitch (comedian) commented on the radio recently about the excellent Clarke and Dawe comedy sketch about a public bath being questioned for not making money; the comedic punchline was "bad news for footpaths."

Some things are more important than their cost if the need is high enough. But don't burn a heap of money on silly ideas: aim small, scale up, get your feet wet early and iterate in public.

That’s easy for me to say but I would rather create stuff in my free time instead of burning money and time on other hobbies.

Be wary of people who only want to be involved in big-money projects, want slices of your idea, or want to be paid advisors. What motivates you may not motivate them.

How to get the feedback you need

Avoid people that do not listen and go into Yoda/advice mode before getting all the facts (especially if they were paid for giving advice).

How to use feedback you receive

Do prioritise advice from potential customers over all others. The Atlassian blog gives tips on using Jira to capture customer feedback in an agile project.

Park all feedback in a project task-tracking package like Trello or Jira (see my guide here). Ensure advice (and solutions) align with the initial problems and value. Additional advice may assist in obtaining VC capital or entering the startup-building cycle without helping your initial customers' problems, and feedback about higher monetisation strategies may sound good to an investor but would not help smaller customers with planned ideas.

Talk to the right people 

  • Talk to an investor if you need cash.
  • Talk to an end-user if you need feedback related to your product.
  • Talk to a developer if you need developer-related advice.

You will receive feedback about the things that people know.


V 1.7 added Legal Info

Short Link: https://fearby.com/go2/feedback/

Filed Under: Advice, Atlassian, Development, Feedback, Investor, JIRA Tagged With: create, idea

How to develop software ideas

July 9, 2017 by Simon

I was recently at a public talk by Alan Jones at the UNE Smart Region Incubator where Alan talked about launching startups and developing ideas.

Alan put it quite eloquently that "with change comes opportunity", and we are all very capable of building the next best thing as technological barriers and costs are much lower than five years ago. Alan also mentioned that 19 in 20 start-ups fail, but "if you focus on solving customer problems you have a better chance of succeeding". Regions need to share knowledge, and you can learn from other people's mistakes.

I was asked after this event to share my thoughts on "how do I learn to develop an app" and "how do you get the knowledge". Here is my poor "brain dump" on how to develop software ideas (it's hard to condense 30 years' experience developing software). I will revise this post over the coming weeks, so check back often.

If you have never programmed before, check out these programming 101 guides here.

I have blogged on technology/knowledge topics in the past at www.fearby.com, and recently I blogged about how to develop cloud-based services (here, here, here, here and here), but this blog post assumes you have a validated "app idea" and you want to know how to develop it yourself. If you do not want to develop an app yourself, you may want to speak with Blue Chilli.

Find a good mentor.


True App Development Quotes

  • Finding development information is easy, following a plan is hard.
  • Aim for progress and not perfection.
  • Learn one thing at a time (Multitasking can kill your brain).
  • Fail fast and fail early and get feedback as early as possible from customers.
  • 10 engaged customers are better than 10,000 disengaged users.

And a bit of humour before we start.

Project Management lol

Here is a funny video on startup/entrepreneur life/lingo


This is a good funny, open and honest video about programming on YouTube.

Follow Seth F Samuel on twitter here.

Don’t be afraid to learn from others before you develop

My fav tips from over 200 failed startups (from https://www.cbinsights.com/blog/startup-failure-post-mortem/ )

  • Simpler websites shouldn't take more than 2-3 months. You can always iterate and extrapolate later. Get your feet wet asap.
  • As products become more and more complex, performance degrades. Speed is a feature for all web apps. You can spend hundreds of hours trying to speed up an app with little success. Incorporating benchmarking tools into the development cycle from the beginning is a good idea.
  • Outsource or buy in talent if you don’t know something (e.g marketing). Time is money.
  • Make an environment where you will be productive. Working from home can be convenient, but often times will be much less productive than a separate space. Also it’s a good idea to have separate spaces so you’ll have some work/life balance.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • We most definitely committed the all-too-common sin of premature scaling. Driven by the desire to hit significant numbers to prove the road for future fundraising and encouraged by our great initial traction in the student market, we embarked on significant work developing paid marketing channels and distribution channels that we could use to demonstrate scalable customer acquisition. This all fell flat due to our lack of product/market fit in the new markets, distracted significantly from product work to fix the fit (double fail) and cost a whole bunch of our runway.
  • If you’re bootstrapping, cash flow is king. If you want to possibly build a product while your revenue is coming from other sources, you have to get those sources stable before you can focus on the product.
  • Don’t multiply big numbers. Multiply $30 times 1.000 clients times 24 months. WOW, we will be rich! Oh, silly you, you have no idea how hard it is to get 1.000 clients paying anything monthly for 24 months. Here is my advice: get your first client. Then get your first 10. Then get more and more. Until you have your first 10 clients, you have proved nothing, only that you can multiply numbers.
  • Customers pay for information, not raw data. Customers are willing to pay a lot more for information and most are not interested in data. Your service should make your customers look intelligent in front of their stakeholders. Follow up with inactive users. This is especially true when your service does not give intermediate values to your users. Our system should have been smarter about checking up on our users at various stages.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.
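
The "don't multiply big numbers" tip above is easy to check in a few lines of code. This sketch compares the spreadsheet projection with what remains once clients cancel; the 10% monthly churn figure is my own illustrative assumption, not from the original list.

```python
# Hypothetical illustration of the "don't multiply big numbers" tip:
# a naive projection versus one that accounts for monthly churn.

def naive_revenue(price, clients, months):
    """The seductive spreadsheet maths: price x clients x months."""
    return price * clients * months

def churned_revenue(price, clients, months, monthly_churn):
    """Each month a fraction of clients cancel; sum what actually remains."""
    total = 0.0
    remaining = float(clients)
    for _ in range(months):
        total += price * remaining
        remaining *= (1.0 - monthly_churn)
    return total

naive = naive_revenue(30, 1000, 24)              # $720,000 on paper
realistic = churned_revenue(30, 1000, 24, 0.10)  # ~$276,000 with 10% monthly churn
print(f"naive: ${naive:,.0f}, with churn: ${realistic:,.0f}")
```

Even a modest churn assumption cuts the headline number by more than half, which is the point the failed-startup tip is making.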

Here are my tips on staying on track developing apps. What is the difference between a website, app, API, web app, hybrid app and software (my blog post here)?

I have seen quite a few projects fail because:

  • The wrong technology was mandated.
  • The software was not documented (by the developers).
  • The software was shelved because new developers hated it or did not want to support it.

Project Roles (hats)

It is important to understand the roles in a project (project management methodology aside) and to know when you are being a "decision maker" or a "technical developer". A project usually has these roles:

  • Sponsor/owner (usually fund the project and have the final say).
  • Executive/Team leader/scrum master (manage day to day operations, people, tasks and resources).
  • Team members (UI, UX, marketers, developers (DevOps, web, design etc.)) who are usually the doers.
  • Stakeholders (people who are impacted (operations, owners, Helpdesk)).
  • Subject Matter Experts (people who should guide the work and not be ignored).
  • Testers (people who test the product and give feedback).

It can be hard as a developer to switch hats in a one-person team.

How do you develop and gain knowledge?

First, document what you need to develop (what problem are you solving, and what value will your idea bring?). Does this solution already exist? Don't build a solution that already exists.

Developing software is not hard; you just need to be logical, do your research, be patient and follow a plan. The hardest part can be gluing components together.

I like to think of developing software like building a car: if you need 4 wheels, do you have 4 wheels? If you want to build it yourself and save some money, can you make the wheels (mould steel-reinforced, vulcanised rubber strips, cast alloys, add bearings and pass regulations), or should you buy the wheels? Some things are cheaper to make than to buy. Developing software can be easy if you know what you are doing, have the experience and are aware of the costs and risks; if you don't, it can lead you down a rabbit hole of endless research, development and testing.

Example 1:

I “need a webpage”:

  • Research: Will Wix, Shopify or a hosted WordPress website do (is it flexible or cheap enough), or do I install WordPress (guide here), or do I learn and build an HTML website and buy a theme and modify it (for a custom/flexible solution)?

Example 2:

I “need an iPhone and Android app”:

Research: You will need to learn iOS and Android programming, and you may need a server or two to hold the app's data, webpage and API. You will also need to set up and secure the servers, and choose between installing a database yourself or going with a "database as a service" like cloud.mongodb.com or Google Firebase.

Money can buy anything (but will it be flexible/cheap enough), time can build anything (but will it be secure enough).


Almost all systems will need a central database to store data; you can choose a traditional relational SQL database or a newer NoSQL database. MySQL is a good/cheap relational SQL database and MongoDB is a good NoSQL database. You will need to decide how your app talks to the database (directly, or via an API protected by OAuth or limited-access tokens). It is a bad idea to open a database directly to the world with no security: sites like www.shodan.io automatically scan the Internet looking for open databases and systems and will report an insecure site to anyone. It is in your interest to develop securely at all stages of development.
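
As a rough sketch of the point above (illustrative only, not production authentication; the token, table and fields are hypothetical), an API layer can check a limited-access token and use bound parameters, instead of exposing the database directly to the world:

```python
# Minimal sketch: the app never talks to the database directly; it calls an
# API function that checks a limited-access token first. Token and schema
# here are hypothetical.
import hmac
import sqlite3

API_TOKEN = "replace-with-a-long-random-secret"  # issued to the app, rotatable

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

def api_get_user(token, user_id):
    """Reject bad tokens (constant-time compare), then use a bound parameter."""
    if not hmac.compare_digest(token, API_TOKEN):
        return {"error": "unauthorised"}          # never reveal why
    row = db.execute("SELECT id, name FROM users WHERE id = ?",
                     (user_id,)).fetchone()
    return {"id": row[0], "name": row[1]} if row else {"error": "not found"}

print(api_get_user("wrong-token", 1))   # {'error': 'unauthorised'}
print(api_get_user(API_TOKEN, 1))       # {'id': 1, 'name': 'alice'}
```

A real deployment would put this behind HTTPS and issue per-user tokens, but the shape is the same: gatekeeper first, parameterised query second.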

CRUD (Create, Read, Update and Delete) is the common group of database operations; implementing them proves you can read, write, update and delete from a database. Performing CRUD operations is also a good benchmark of how fast the database is. If the database is the slowest link, you can cache database values in memory (read my guide here). Caching can turn a cheap server into a faster server. Learning by doing builds skills quickly, so "research", "do" and "learn".
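
Here is a minimal CRUD example against SQLite (chosen because it ships with Python; the notes table and dict-based cache are my own illustration), showing how a read-through cache lets repeat reads skip the database:

```python
# The four CRUD operations plus a tiny read-through cache.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
cache = {}  # id -> body; a real app might use Redis or memcached here

def create(body):
    cur = db.execute("INSERT INTO notes (body) VALUES (?)", (body,))
    return cur.lastrowid

def read(note_id):
    if note_id in cache:                      # cache hit: skip the database
        return cache[note_id]
    row = db.execute("SELECT body FROM notes WHERE id = ?",
                     (note_id,)).fetchone()
    if row:
        cache[note_id] = row[0]
    return row[0] if row else None

def update(note_id, body):
    db.execute("UPDATE notes SET body = ? WHERE id = ?", (body, note_id))
    cache.pop(note_id, None)                  # invalidate the stale cache entry

def delete(note_id):
    db.execute("DELETE FROM notes WHERE id = ?", (note_id,))
    cache.pop(note_id, None)

nid = create("hello")
print(read(nid))      # hello (from the database, now cached)
update(nid, "hello world")
print(read(nid))      # hello world (cache was invalidated on update)
delete(nid)
print(read(nid))      # None
```

The invalidation on update/delete is the part people forget; without it the cache happily serves stale data.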

Most solutions will need a website (and a web server). Here is a good article comparing Apache and Nginx (the leading open source web servers).

Stacks and Technology – There are loads of development environments (stacks), frameworks and technologies you can choose from. Frameworks supposedly make things easier and faster, but frameworks and technologies change frequently and can be abandoned (see the 2016 frameworks-to-learn guide and 2017 frameworks-to-learn guide). Also be careful: many frameworks run about 30% slower than raw server-side and client code. I'd recommend you learn a few technologies like NGINX, NodeJS, PHP and MySQL and move up from there.

The MEAN stack is a popular web development platform (MEAN = MongoDB, ExpressJS, Angular and NodeJS).

Apps can be developed for Apple platforms by signing up here (about $150 AUD a year) and using the Xcode IDE. Apps can be developed for the Android platform using Android Studio (publishing requires a one-off registration fee of about $25 USD). Microsoft has a developer portal for the Windows platform. Google also has an online scalable database-as-a-service called Firebase. If you look hard enough you will find a service for everything, but connecting those services can be time-consuming or costly, or can make a secure and scalable solution impossible, so beware of relying on as-a-service platforms. I used the Corona SDK to develop an app but abandoned the platform due to changes in the vendor's communication and enforced policies.

If you are not sure, don't be afraid to ask for help on Twitter.

Twitter is awesome for finding experts

Recent twitter replies to a problem I had.

Learning about new Technology and Stacks

To build knowledge you need to learn stuff, build stuff, test (benchmark), get feedback and build more stuff. I like to learn about new technology and stacks by watching Udemy courses; Udemy has a huge list of development courses (Web Development, Mobile Apps, Programming Languages, Game Development, Databases, Software Testing, Software Engineering etc.).

I am currently watching a Practical iOS 11 course by Stephen DeStefano on Udemy to learn about unreleased/upcoming features on the Apple iPhone (learning about XCode 9, Swift 4, What’s new in iOS 11, Drag and drop, PDF and ARKit etc).

Udemy is awesome (Udemy often have courses for $15).

If you want to learn HTML go to https://www.w3schools.com/.

https://devslopes.com/ has a number of development-related courses and an active community of developers in a chat system.

You can also do formal study via an education provider (e.g. Bachelor of computer sciences at UNE or Certificate IV in programming or Diploma in Software Development at TAFE).

I would recommend you use Twitter and follow keywords (hashtags) around key topics (e.g #www, #css, #sql, #nosql, #nginx, #mongodb, #ios, #apple, #android, #swift, #objectivec, #java, #kotlin) and identify users to follow. Twitter is great for picking up new information.

I follow the following developers on YouTube (TheSwiftGuy, AppleProgrammer, AwesomeTuts, LetsBuildThatApp, CodingTech etc)

Companies like https://www.civo.com/ offer developer-friendly features with hosting, https://www.pebbled.io/ offer to develop for you and https://serverpilot.io/ help you spin up software on hosting providers.

What To Develop

First, you need to break down what you need (e.g. "I want an app for iOS and Android in 5 months that does XYZ. The app must be secure and fast. Users must be able to register an account and update their profile").

How high you ensure your project scales depends on your expected peak of concurrent active users (and the ratio of paying to free users). You can build your app to scale very high, but that costs more money up front, and it can be wasteful to pay for scalability early. As long as you have a good product, a robust UI and solid networking/retry routines, you don't need to scale high early.
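
A "robust retry routine" of the kind mentioned above might look like this sketch (the backoff values and the flaky endpoint are hypothetical, for illustration only):

```python
# Retry a flaky call with exponential backoff instead of failing on the
# first error.
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn(); on failure wait base_delay, 2x, 4x... then re-raise."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Hypothetical flaky endpoint: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("network down")
    return {"status": 200}

print(with_retries(flaky_fetch))  # {'status': 200} on the third attempt
```

In a mobile app the same idea sits between the UI and the network layer, so a brief outage shows a spinner rather than an error screen.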

Once you know what you need, you can search the open-source community for code you can use. I use Alamofire for iOS network requests, SwiftyJSON for processing JSON data, and other open-source software. The only downside of using open-source software is that it may be abandoned by its creators and break in the future; saving time early may cost you time later.

Then you can break down what you don’t want. (e.g “I don’t want a web app or a windows phone or windows desktop app”). From here you will have a list of what you need and what you can avoid.

You will also need to choose a project management methodology (I have blogged about this here). With a list of action items and a plan, you can work through developing your app.

While you are researching, it is a good idea to develop smaller fun projects to refine your skills. There are a number of System Development Life Cycles (SDLCs), but don't worry if you get stuck; seek advice or move on. It is a good idea to get users beta testing your app early and to seek feedback. Apple has the TestFlight app for sending beta versions of apps to beta testers. Here is a good guide on Android beta testing.

If you are unsure about certain user interface options or features, divide your beta testers and perform A/B (split) testing to determine the most popular user interfaces. Capturing user data and logs can also help with debugging and understanding how users behave.
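
One common way to split beta testers for the A/B testing described above is to hash each tester's id, so the assignment is stable across sessions; the experiment name and user ids below are hypothetical:

```python
# Deterministic A/B bucketing: the same user always lands in the same
# bucket, and buckets split roughly 50/50 across a user population.
import hashlib

def ab_bucket(user_id, experiment="new_signup_screen"):
    """Deterministically assign a user to variant 'A' or 'B'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

users = [f"user{i}" for i in range(1000)]
split = [ab_bucket(u) for u in users]
print(split.count("A"), split.count("B"))  # roughly a 50/50 split

# The same user always lands in the same bucket:
assert ab_bucket("user42") == ab_bucket("user42")
```

Including the experiment name in the hash means a user's bucket in one experiment doesn't correlate with their bucket in the next.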

Practice

Develop smaller proof of concept apps in new technologies or frameworks and you will build your knowledge and uncover limitations in certain frameworks and how to move forward with confidence. It is advisable to save your source code for later use and to share with others.

I have shared quite a bit of code at https://simon.fearby.com/blog/ that I refer to from time to time. I should have shared this on GitHub but I know Google will find this if people want it.

Get as much feedback as you can on what you do and choose (don’t trust the first blog post you read (me included)).

Most companies offer webinars on their products; I like the NGINX webinars. Tutorialspoint has courses on development topics. SitePoint is a good development site that offers free books, courses and articles. ProgrammableWeb has information on what APIs are.

You may want to document your application flow to better understand how the user interface works.

Useful Tools

Balsamiq Mockups and Blueprint are handy for mocking up applications.

C9.io is a great web-based IDE that can connect to a VM on AWS or Digital Ocean.  I have a guide here on connecting Cloud 9 to an AWS VM here.

I use the Sublime Text 3 text editor when editing websites locally.

(image courtesy of https://www.sublimetext.com/ )

I use the Paw app on the Mac to help test APIs I develop locally.

(image courtesy of https://paw.cloud )

Snippets is a great application for the Mac for storing code snippets.

I use the Cornerstone Subversion app for backing up my code on my Mac.

Web servers: IIS (https://www.iis.net/), NGINX, Apache.

NodeJS programming manual and tutorials.

I use Little Snitch (guide here) for simulating network down in app development.

I use the Forklift file manager on OSX.

Databases: SQL tutorials, NoSQL Tutorials, MySQL documentation.

Siege is a command-line HTTP load testing tool.


http://loader.io/ is a nice web-based benchmarking tool.

Bootstrap is an essential mobile responsive framework.

Atlassian Jira is an essential project tracking tool. More on Agile Epics v Stories v Tasks on the Atlassian community website here. I have a post on developing software and staying on track here using Jira.

Jsfiddle is a good site that allows you to share code you are working on or having trouble with.

Dribbble is a “show and tell” site for designers and creatives.

Stackoverflow is the go-to place to ask for help.

Things I care about during development phases.

  • Scalability
  • Flexibility
  • Risk
  • Cost
  • Speed

Concentrating too much on one facet can put the other facets at risk. Good programmers can recommend and deliver a solution that is strong in all areas (I hate developing apps that are slow but secure, or scalable but complex).

Platforms

You can sign up for online servers like Azure or AWS (my guide here), or you can use cheaper CPanel-based hosting. Read my guide on the costs of running a cloud-based service.

Use this link to get a free Digital Ocean server for two months. Read my blog post here to help set up your VM. You can also install Ubuntu on your local machine (read my guide here). Don't forget to use a Git code repository like GitHub or Bitbucket.

Locally you can install Ubuntu (developers edition) and have a similar environment as cloud platforms.

Lessons Learned

  • Deploy servers close to the customers (Digital Ocean is too far away to scale in Australia).
  • Accessibility and testing (make things accessible from the start).
  • Backup regularly (Use GIT, backup your server and use Rsync to copy files to remote servers and use services like backblaze.com to backup your machine).
  • Transportability of technology (use open technology and don't lock yourself into one platform or service).
  • Cost (expensive and convenient solutions may be costly).
  • Buy in themes and solutions (wrapbootstrap.com).
  • Do improve what you have done (make things better over time). Think progress and not perfection.

There is no shortage of online comments bagging certain frameworks or platforms so look for trends and success stories and don’t go with the first framework you find. Try candidate frameworks and services and make up your own mind.

A good plan, violently executed now, is better than a perfect plan next week. – General George S. Patton

Costs

Sometimes cost is not the deciding factor (read my blog post on Alibaba Cloud). You should estimate your app's costs per 1,000 users. What do light vs heavy users cost you? I have a blog post on the approximate cost of cloud services. I started researching a scalable NoSQL platform on IBM Cloudant and it was going to cost $4,000 USD a month, and integrating my own app logic and security was hard. I ended up testing MongoDB Cloud, where I can scale to three servers for $80 a month, but for now I am developing my current project on my own AWS server with a MongoDB instance. Read my blog post here on setting up MongoDB, and my post on the best MongoDB GUI.

Here is a great infographic for viewing what’s involved in mobile app development.

You can choose from a number of tools or technologies to achieve your goals; for me it is doing it economically, securely and in a scalable way with predictable costs. It is quite easy to develop something that is costly, won't scale, or is not secure or flexible. Don't get locked into expensive technologies. For example, AWS has a pay-per-use NodeJS service called Lambda where you get a million free requests a month and are then charged $0.0000002 per request thereafter. This sounds good, but I prefer fixed pricing and DIY servers as they allow me to build my own logic into apps (which matters more to me than scalability).
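
The Lambda request pricing above is easy to sanity-check with some quick arithmetic (note this covers the per-request charge only; AWS also bills compute time separately, per GB-second):

```python
# Back-of-envelope check of per-request pricing: $0.0000002 per request
# beyond the first free million each month.

def lambda_request_cost(requests_per_month, free=1_000_000,
                        per_request=0.0000002):
    """Monthly request cost in dollars; requests inside the free tier cost 0."""
    return max(0, requests_per_month - free) * per_request

print(lambda_request_cost(500_000))      # 0.0 (inside the free tier)
print(lambda_request_cost(11_000_000))   # ten million billable requests: about $2
```

At these rates the request charge is rarely the expensive part; the unpredictability comes from compute time and traffic spikes, which is the author's argument for fixed-price servers.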

Using open-source software or off-the-shelf solutions may speed things up initially, but will it slow you down later? Ensure free solutions are complete and supported, and ensure frameworks are actually helping. Do you need one server or multiple servers (guide on setting up a distributed MySQL environment)? You can read about my scalability-on-a-budget journey here. You can speed up a server in two ways: scale up (add more MHz or CPU cores) or scale out (add more servers).

Start small and use free frameworks and platforms, but have a tested scale-up plan. I researched cheap Digital Ocean servers and moved to AWS to improve latency, and tested MongoDB on Digital Ocean and AWS, but have a plan to scale up to cloud.mongodb.com if need be.

Outsource (contractors) 

Remember, outsourcing work tasks (or outsourcing development completely) can buy you time and/or deliver software faster. Outsourcing can also introduce risks and be expensive. Ask for examples of previous work and get raw numbers on costs (now and in the future) and the concurrent users that a particular bit of outsourced work will support.

If you are looking to outsource work, do look at work the person or company has done before (is it fast, compliant, mobile-friendly, scalable, secure, robust and backed up, and do you have the rights to edit the code and own the IP?). I’d be cautious of companies who say they can do everything and don’t show live demos.

Also, beware of restrictions on your code set by the contractors. Can they do everything you need (compare with your list of MoSCoW must-haves)? Sometimes contractors only code what they are comfortable with, and that can impact your deliverables.

Do use a private Git repository (that you own) like GitHub or BitBucket to secure your code and use software like Trello or Atlassian JIRA to track your project. Insist the contractors use your repository to retain control.

You can always sell equity in your idea to an investor and get feedback/development from companies like Bluechilli.

Monetization and data

Do have multiple monetization streams (initial app purchase cost, in-app purchases, subscriptions, in-app credit, advertising, selling code/components etc). Monthly billing, rather than yearly subscriptions, works best to ensure cash flow.

Capture usage data and determine trends around successful engagement, then improve what works. Use A/B testing to roll out new features.

I like Backblaze’s post on getting your first 1,000 customers.

Maintenance, support risks and benefits

Building your own service can be cheaper but also riskier: if you fail to secure your app you are in trouble, if you cannot scale you are in trouble, and if you don’t update your server when vulnerabilities come out you are in trouble. Also, research monetization strategies. Apple apps do appear to deliver more profit than Android apps. Developers often joke that “Apple devices offer 90% of the profits and 10% of the problems, and Android apps offer 90% of the problems and 10% of the profits”.

Also, Apple users tend to update to the latest operating system sooner, whereas Android devices are rather fragmented.

Do inform your users with self-service status pages and informative error messages, and don’t annoy users.

Use Free Trials and Credit

Most vendors have free trials, so use them.

AWS has a 12-month free tier: https://aws.amazon.com/free/

Use this link to get two months free with Digital Ocean.

Microsoft Azure also gives away free credit.

Google Cloud also has free credit.

Don’t be afraid to ask.

MongoDB Cloud also gives away free credit if you ask.

Security

Sites like Shodan.io will quickly reveal weaknesses in your server (and services); this will help you build robust solutions from the start, before hackers find the holes. Read https://www.owasp.org/index.php/Main_Page to learn how to develop secure websites. Listen to the Security Now podcast to learn how technology works and gets broken. Following Troy Hunt is recommended to keep up to date with security in general. @0xDUDE is a good ethical hacker to follow to stay up to date on security exploits, and @GDI_FDN is a good non-profit organization that helps defend sites that use open source software.

White hat hackers exist, but so do black hat ones.

Read the Open Web Application Security site here. Read my guide on setting up public key pinning in security certificates here.

You can use the ASafaWeb site to test your site for common ASP.NET security flaws. If you have a secure certificate on your site, ensure the certificate is secure and up to date with the SSL Labs SSL Test site.

SSL Cert

Once your website’s IP address is known (get it from SSL Labs), run a scan over your site with https://www.shodan.io/ to find open ports or security weaknesses.

Shodan.io allows you and others to see public information about your server and services. You can read about well-known internet ports here.

Anyone can find your server if you are running older (or current) web servers and/or services.

It is a good idea to follow security researchers like Steve Gibson and Troy Hunt and stay up to date with live exploits. http://blog.talosintelligence.com is also a good site for reading technical breakdowns of exploits.

Networking

Do share and talk about what you do with other developers. You can learn a lot from other developers and this can save you loads of time and mistakes. True developers love talking about their code and solutions.

Decision Making

Quite a lot of time can be spent deciding what technology or platform to use. I decide by weighing cost, risk and security against flexibility, support and scalability. If I need more flexibility, less support overhead or more scalability, then I’ll choose a different technology/platform. Generally, technology can help with support. Scalable solutions need effort from start to finish (it is quite easy to slow down any technology or service).

Don’t be afraid to admit you have chosen the wrong technology or platform. It is far easier to research and move on than live with poor technology.

If you have chosen the wrong technology and stick with it, you (and others) will loathe working with it (impacting productivity/velocity). Do you spend time swapping technology or platforms now, or be less productive later?

Intellectual property and Trademarks

Ensure you search international trademarks for your app terms before you start using them. The Australian ATO has a good Australian business name checker here.

https://namechk.com/ is also a good place to search for your app ideas name before you buy or register any social media accounts.

Using https://namechk.com/ you can see “mystartupidea” name is mostly free.

And the name “microsoft” is mostly taken.

Seek advice from start-up experts at https://www.bluechilli.com/ like Alan Jones.

See my guide on how to get useful feedback for your ideas here.

Tips

  1. Use Git source control systems like GitHub or Bitbucket from the start, and offsite-backup your server and environments frequently. Digital Ocean charges 20% of your server’s cost to back it up. AWS has multiple backup offerings.
  2. Start small and scale up when needed.
  3. Do lots of research, test different platforms, frameworks and technologies, and you will know what you should choose to develop with.

(Image above found at http://startupquotes.startupvitamins.com/ Follow Startup Vitamins on Twitter here.).

You will know you are a developer when you have gained enough knowledge and experience to automatically avoid technologies that will not fit a solution.

Share

Don’t be afraid to share what you know (read my blog post on this here). Sharing allows you to solidify your knowledge and get new information. Shane Bishop from the EWWW Image Optimizer WordPress plugin wrote Setting up a fast distributed MySQL environment with SSL for us. If you have something to share here, please let me know on Twitter.

It’s never too late to do

One final tip: knowledge is not everything; planning and research are key. A mind that can’t develop yet may be better than one that can, because it has no experience (or baggage) and may find faster ways to do things. Thanks to http://zachvo.com/ for teaching me this during a recent WordPress re-deployment. Sometimes the simplest solution is the best.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

DRAFT: 1.86 added short link

Short: https://fearby.com/go2/develop/

Filed Under: Advice, Android, Apple, Atlassian, Backup, BitBucket, Blog, Business, Cloud, CoronaLabs, Cost, Development, Domain, Firewall, Free, Git, GitHub, Hosting, JIRA, mobile app, MySQL, Networking, NodeJS, OS, Project Management, Scalability, Scalable, Security, Server, Software, Status, Trello, VM Tagged With: ideas

Self Service Status Pages

June 12, 2017 by Simon

I am a big fan of companies having external (and internal) self-service status pages that list the statuses of applications and services. If you have an online presence or are developing an API or service, you should consider developing an automated status page that lists your services’ statuses.
Currently, users can check if a website is down by visiting sites like http://downforeveryoneorjustme.com/

An app with multiple secure back ends will be harder for customers to diagnose when it is down, so offering built-in status screens is essential. It is a good idea to create a dedicated system status page (e.g. https://status.yourproduct.com) and have that page show various statuses, served from a separate server or the same server. You don’t need a dedicated subdomain (a subfolder will do); Apple and Google use sub-page status pages.

A good status page will show the status of services you offer. e.g.

  • Online shopping cart: UP
  • Online forum: UP
  • Payment Gateway: UP
  • SMS gateway: UP (Resolved connection issue 12 mins ago).
  • App API: DOWN (expected restoration in 12 mins).

If things are down, it is a good idea to add a balloon message or alert to live systems (and link to your status page); not everyone remembers maintenance windows or keeps up to date.

The status page can also contain other data that may help internal teams diagnose faults like:

  • Server Room Air conditioner temperature: 39c.
  • Room temperature: 41c.
  • Floor water sensor #1: TRUE.
  • Floor water sensor #2: FALSE.
  • Humidity: 89%.
  • Secure Server room 001 photos ( link 1, link 2)
  • AD server: UP.
  • DNS: UP.
  • Server Rack 001 Intake Temperature: 38c.
  • Server Rack 001 Internal Temperature: 78c.
  • Server Rack 001 External Temperature: 64c.

 

Pro Active v Reactive monitoring

It is a good idea to proactively detect and automatically remediate issues before you are forced to reactively resolve something. Don’t rely on an email from a monitoring service saying your server is down (or was down), or on a user to report an issue. Users will often sit back and use the outage to do something else, and this hurts your service’s reputation and your business.

I would monitor in this order.

  1. External HTTP checking (External monitor checking your server).
  2. External Application checking (external verification of logins or application services).
  3. Internal Server stats (network link up status, link speed, network connections and network failure rates. A status screen can be easily built importing server stats and server performance).
  4. Known historical issues (monitor what has caused your sites to break before).
  5. Data from applications (historical patterns or known triggers).
  6. User Error Reports

Waiting for users to report errors is bad. Sites like www.trello.com and www.onesignal.com have good programmable services like web push, mobile push and/or phone and SMS alerts that can be connected into your support processes.

Performance

Showing current service performance and endpoint status allows your customers to set their expectations, and it shows you take your service’s uptime seriously.

Data

If you have logs or data available from applications you may as well automate and summarize it. “Without data, you’re just another person with an opinion.” – W. Edwards Deming

Ignoring data and not reporting issues is a recipe for poor service.

Archiving multiple data points 

It is a good idea to log and archive network usage, service CPU and resource usage (app, web server, I/O etc) to allow you to find correlations and failure points. Analysis is key.

ETAs

Do provide ETAs on resolution as you work through an issue.

Maintenance

Listing planned and scheduled maintenance (e.g. code rollouts, server reboots etc) allows you to prevent support calls.

Automation

You can automate many things from a status page: if a certain event happens you can attempt an automated resolution (e.g. reboot a server) or let diagnosing staff know a resolution has been attempted.

You can automatically change email autoresponder text (mentioning things are down) when you reply to incoming emails and tickets, and/or automatically post status changes to social media. Automatically informing users (instead of ignoring and burying problems) goes a long way to building trust.

You can automate notifications of potential problems to internal staff from the status page and automatically inform key staff when certain things happen (e.g. when secure certificates are about to expire, or when the API is overloaded or the network is congested).

Information Validity

A good status page will list when the status was last updated (e.g. 3 minutes ago).

Statistics and Graphs

Statistics like uptime and historical graphs (uptime and latency) can help keep track of reliability trends.

Inform your users when everything is OK.

Don’t forget to inform users when everything is back up; generally, people will stop using a product or service until a system is restored, and users will not sit there pressing refresh for long. Offer web push or RSS feeds.

Internal considerations

Improve your documentation; having good documentation (and known past problems and resolutions) on hand will allow for quicker resolutions in future.

Followers

Allow customers to subscribe to status changes (via RSS) or use dedicated status accounts on social media. Providing a JSON feed also shows your commitment to openness about your service.

Adding website headers to inform users of upcoming outages is a good idea. The Department of Industry, Innovation and Science does it right.


Social Media

You should also set up social media status accounts and pin status information like CivoCloud does.


History

Allowing customers to see your past problems (description, date, time and resolution) lets them understand the risks and lets you focus on remediation.

Example Status Pages

Notify users

No one wants to look bad, but do tell users when things are down, and let users opt out of notices.

Status Page ( systems, validity, ticket )

https://www.apple.com/au/support/systemstatus/

AWS Status Page ( history, more, regions, subscribe, validity ).

https://status.aws.amazon.com/

Digital Ocean Status Page ( history, description, and resolution ).

https://status.digitalocean.com/

Use this link to get a free server for two months.

Rack Space Status Page ( general notices, current status, maintenance ).

https://rackspace.service-now.com/system_status/

Heroku Status Page ( history, apps, tools, services, subscribe ).

https://status.heroku.com/

Discord Status Page ( services, history ).

https://status.discordapp.com/

Google Cloud Status Page ( history, description, and resolution ).

https://status.cloud.google.com/

Shopify Status Page ( response times, services, validity, subscribe, history ).

https://status.shopify.com/

Playstation Status Page ( services ).

https://status.playstation.com/en-au/

Github Status Page ( validity, response time, history ).

https://status.github.com/

Vultr Status Page

https://www.vultr.com/status/

Team Viewer Status Page  ( validity, services, history, subscribe ).

https://status.teamviewer.com/

Office 365 Status Page ( services ).

https://portal.office.com/servicestatus

G Suite Status Page ( services, history ).

http://www.google.com.au/appsstatus#hl=en-GB&v=status

Telstra Status Page ( Status, web page )

http://servicestatus.telstra.com/

Commercial Status Page Services

If you are not into developing a custom status page, you can use a commercial status page service (but they are expensive).

e.g. https://www.statuspage.io – $49 a month (Atlassian owned). I’d rather develop my own status page on a $2.50/m Vultr server with a Let’s Encrypt SSL certificate.

Sites like Cloudflare offer auto failover and load balancing features for your site.


V1.7 added auto failover features

Filed Under: Development, Marketing, Status, Tech Advice

Digital disruption or digital tinkering

December 20, 2016 by Simon

The biggest buzzwords used by prime ministers, presidents and management these days have been “Innovation” and “digital disruption”. As a developer or manager, do you understand what goes into a new digital customer-focused service like an API or data-driven portal? How well is your business or product doing in the age of innovation and digital disruption? Do you listen to what your customers want or need?

When to Pivot

There comes a time when businesses realize they need to pivot in order to stay viable.

  • People don’t rent VHS movies; they download movies from the Internet.
  • Printing photographs – who does that anymore?
  • People learn from videos on YouTube or Khan Academy for free, or pay for courses from Pluralsight.com, Coursera.org, Udemy.com or Lynda.com.
  • Information is wanted 24/7, with a call to customer service only if it cannot be sourced online.

kodak-bankruptcy

Image source.

Chances are 90% of your customers are using mobile or tablet devices on any given day. If you are not interacting with your customers via personalized/mobile technology, prepare to be overtaken as a business.

Pivoting may require you to admit you are behind the eight ball, take a risk, and set up a new customer-focused web portal, API, app or services. Make sure you know what you need before the lure of services, buzzwords and “shiny object syndrome” from innovation blog posts and consultants takes hold.

Advocates and blockers of change

Creating change in an organization that bean-counts every dollar, exterminates all risk and ignores ideas is a hard sell. How do you get support from those with power amid endless rolls of red tape?

Bad reasons for saying no to innovation:

  • You can’t create a mobile app to help customers because the use of our logo won’t be approved.
  • Don’t focus on customer-focused automation, analytics and innovation because internal manual processes need attention first.
  • Possible changes in 2 years outside of our control will possibly impact anything we create.
  • Third eyelids.

Management’s support of experimentation and change is key to innovation. Harvard Business Review has a great post on this: The Why, What, and How of Management Innovation.

Does your organization value innovation? This is possibly the best video that describes how the best businesses focus on innovation and take risks. Simon Sinek: How great leaders inspire action.

Here is a great post on How To Identify The Most Dangerous Person In Your Company who blocks innovation and change.

Also a few videos on getting staff on board and motivation and productivity.

Project Perspectives
consultant_001

Project Focus

  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.
  • Focus on customer requirements and what you need to be doing and ignore the tech frameworks/language/features/services.

I said that three times (because it is important).

Before you begin coding, learn from those who have failed

Here are some of the best tips I have collected from start-ups who have failed.

  • We didn’t spend enough time talking with customers and we’re rolling out features that I thought were great, but we didn’t gather enough input from clients. We didn’t realize it until it was too late. It’s easy to get tricked into thinking your thing is cool. You have to pay attention to your customers and adapt to their needs.
  • The cloud is great. Outsourcing is great. Unreliable services aren’t. The bottom line is that no one cares about your data more than you do – there is no replacement for a robust due diligence process and robust thought about avoiding reliance on any one vendor.
  • Your heart doesn’t get satisfied with any levels of development. Ignore your heart. Listen to your brain.
  • You can always iterate and extrapolate later. Wet your feet asap.
  • As the product became more and more complex, the performance degraded. In my mind, speed is a feature for all web apps so this was unacceptable, especially since it was used to run live, public websites. We spent hundreds of hours trying to speed up the app with little success. This taught me that we needed to have benchmarking tools incorporated into the development cycle from the beginning, due to the nature of our product.
  • It’s not about good ideas or bad ideas: it’s about ideas that make people talk. Make some aspect of your product easy and fun to talk about, and make it unique.
  • We really didn’t test the initial product enough. The team pulled the trigger on its initial launches without a significant beta period and without spending a lot of time running QA, scenario testing, task-based testing and the like. When v1.0 launched, glitches and bugs quickly began rearing their head (as they always do), making for delays and laggy user experiences aplenty — something we even mentioned in our early coverage.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • Our biggest self-realization was that we were not users of our own product. We didn’t obsess over it and we didn’t love it. We loved the idea of it. That hurt.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.
  • It may seem surprising that a seemingly successful product could fail, but it happens all the time. Although we arguably found product/market fit, we couldn’t quite crack the business side of things. Building any business is hard, but building a business with a single app offering and half of your runway is especially hard.

Buzzwords

The Innovation landscape is full of buzzwords, here are just a few you will need to know.

  • API – Application Program Interface is a method that uses web address ( http://www.server.com/api/important/action/) to accept requests and deliver results. Learn more about API’s here http://www.programmableweb.com/category/all/apis
  • AR – Augmented reality is where you use a screen on a mobile, tablet or PC to overlay 3D or geospatial information.
  • Big Data – Is about taking a wider view of your business data to find insights and to predict and improve products and services.
  • BYOD – Bring your own device.
  • BYOC – Bring your own cloud.
  • Caching – Using software to deliver data from memory rather than from slower database each time.
  • Cloud – Someone else’s computer that you run software or services on.
  • CouchDB – An Apache designed Key/Value NoSQL JSON store database that focuses on eventual replication.
  • DaaS – Desktop as a service
  • DbaaS – Database as a service (hardware and database software maintained by others but your data).
  • DBMS – Database Management System – the GUI
  • HPC – High-Performance Computing.
  • IaaS – Cloud-based Servers and infrastructure (Google Cloud, Amazon AWS, Digital Ocean and Vultr and Rackspace).
  • IDaaS – Third Party Authorisation management
  • IOPS – Input/Output Operations Per Second – what limitations are on the interface or software in question.
  • IoT – Internet of Things: small devices that can display, sense or update information (an internet-connected fridge, or a button that orders more toilet paper).
  • iPaaS– integration Platform as a Service (software to integrate multiple XaaS)
  • JSON – A better CSV file (read more here)
  • MaaS – Monitoring as a Service (e.g Keymetrics.io)
  • CaaS – Communication as a service (e.g http://www.twillio.com)
  • Micro-services – an existing service that is managed by another vendor (e.g Notifications, login management, email or storage), usually charged by usage.
  • MongoDB – Another Key/Value NoSQL JSON Database that has upfront Replication
  • NoSQL – A No SQL database that stores data in JSON documents instead of normalised related tables.
  • PaaS – A larger stack of SaaS that you can customise, from vendors like Azure (Active Directory, Compute, Storage Blobs etc), AWS (SQS, RDS, ElastiCache, Elastic File System), Google Cloud (Compute Engine, App Engine, Datastore), Rackspace etc.
  • Rate Limiting – Ability to track and limit a user’s request to an API.
  • SaaS – A smaller software component that you can use or integrate (Google Apps, Cisco WebEx, GoToMeeting).
  • Scalable – the ability to have a website or service handle thousands to millions of hits, with a baked-in way to handle exponential growth.
  • Scale Up – Increase the CPU speed (and thus the workload a server can handle).
  • Scale Out – Add more servers and distribute the load instead of making individual servers faster.
  • SQL – A traditional relational database query language.
  • VR – Virtual Reality is where you totally immerse yourself in a 3D world with a head-mounted display.
  • XaaS – Anything as a service.
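To make the “Rate Limiting” entry above concrete, here is a minimal fixed-window rate limiter sketch. The limits are arbitrary, and a production system would usually use a token bucket or a shared store like Redis rather than in-process memory:

```javascript
// Minimal per-user fixed-window rate limiter (illustrative only).
function makeRateLimiter(maxRequests, windowMs) {
  const counters = new Map(); // user -> { count, windowStart }
  return function allow(user, now = Date.now()) {
    const entry = counters.get(user);
    if (!entry || now - entry.windowStart >= windowMs) {
      // New user or expired window: start a fresh count.
      counters.set(user, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= maxRequests;
  };
}

const allow = makeRateLimiter(3, 60000); // 3 requests per minute per user
console.log(allow('alice'), allow('alice'), allow('alice'), allow('alice'));
// first three allowed, fourth rejected until the window resets
```

An API gateway would call `allow(userId)` on every request and return HTTP 429 when it comes back false.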

External or Online Advice

A consultant once joked to our team that their main job was to “Con” and “Insult” you ( CONinSULTant ).  Their main job is to promote what they know/sell and sow seeds of doubt about what you do. Having said that please take my advice with a grain of salt (I am just relaying what I know/prefer).

Consultants need to rapidly convert you to their way of thinking (and services); consultants gloss over what they don’t know and lead you down a happy path to solution nirvana (often ignoring your legacy apps or processes; any roadblocks are relished as an opportunity for more money-making). This is great if you have endless buckets of money and want to rewrite things over and over.

Having consultants design and develop a solution is not all bad, but that would make this developer-focused blog post boring.

Microsoft IIS, Apache, NGINX and Lighttpd are all good web servers, but each has a different memory footprint, performance profile and feature set when delivering static v dynamic content, and each platform has a maximum number of concurrent users it can handle per second for a given server configuration.

You don’t need expensive solutions, read this blog post on “How I built an app with 500,000 users in 5 days on a $100 server”

Snip: I assume my apps will be successful. There’s no point in building an app assuming it won’t be successful. I would not be able to sleep if my app gains traction and then dies due to bad tech. I bake minimum viable scalability principles into my app. It’s the difference between happiness and total panic. It’s what I think should be part of an app MVP (Minimum Viable Product).

Blind googling to find the best platform can be misleading as it is hard to compare apples to apples. Take your time and write some code and evaluate for yourself.

  • This guide highly recommends Microsoft .NET and IIS web servers: https://www.ageofascent.com/2016/02/18/asp-net-core-exeeds-1-15-million-requests-12-6-gbps/
  • This guide says G-WAN, NGINX and Apache are good http://gwan.com/benchmark

Once you start worrying about scalability you start to plan for multiple servers, load balancing, replication and caching; be prepared to open your wallet.

I prefer the free NGINX, and if I need more grunt down the track I can move to NGINX Plus as it has loads of advanced scalability and caching options: https://www.nginx.com/products/
Alternatively, you can use XaaS for everything and have other people worry about uptime/scaling and data storage, but I find it is inevitable that you will need the flexibility of a self-managed server and FULL control of the core processes.

Golden rule = prove it is cheaper/faster/more reliable and don’t just trust someone. 

Common PaaS, SaaS and Self-Managed Server Vendors

Amazon AWS and Azure are the go-to cloud vendors, offering robust and flexible services.

Azure: https://azure.microsoft.com/en-us/

Amazon AWS: https://aws.amazon.com/

Google Cloud has many cloud offerings, but product selection is hard. Prices are high, and Google tends to kill off products that don’t make money (e.g. Google Gears).

Google Cloud:

https://cloud.google.com/

Simple Self Managed Servers

If you want a server in the cloud on the cheap Linode and Digital Ocean have you covered.

  • Digital Ocean: http://www.digitalocean.com
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Linode: https://www.linode.com/

High-End Corporate vendors

  • Rackspace: https://www.rackspace.com/en-au/cloud
  • IBM Cloud: http://www.ibm.com/cloud-computing/au/#infrastructure

Other vendors

  • Engineyard: http://www.engineyard.com/
  • Heroku: https://www.heroku.com/
  • Cloud66: http://www.cloud66.com/
  • Parse: DEAD

Moving to Cloud Pro’s

  • Lowers Risk
  • Outsource talent
  • Scale to millions of users/hits
  • Pay for what you use
  • Granular access
  • Potential savings *
  • Lower risk *

Moving to Cloud Con’s

  • Usually billed in USD
  • Limited upload/downloads or API hits a day
  • Intentional tier pain points (Limited storage, hits, CPU, data transfers, Minimum servers).
  • Cheaper multi-tenant servers v expensive dedicated servers with dedicated support
  • Limited IOPS (e.g. 30 API hits a second, then $100 per additional 10 req/sec)
  • XaaS Price changes
  • Not fully integrated (still need code)
  • Latency between Services.
  • Limited access for developers (not granular enough).
  • Security

Vendors can change their prices whenever they want, I had a cluster of MongoDB servers running on AWS (via http://www.mongodb.com/cloud/ ) and one day they said they needed to increase their prices because they underestimated the costs for the AWS servers. They gave me some credit but I was instantly paying more and was also tied to USD (not AUD). A fall in the Australian dollar will impact bills in a big way.

Vendor Uptime:

Not all vendors are stable, do your research on who are the most reliable: https://cloudharmony.com/status

Quick Status Pages of leading vendors.

  • AWS: https://status.aws.amazon.com/
  • Azure: https://azure.microsoft.com/en-us/status/
  • Vultr: https://www.fearby.com/article/setting-vultr-vm-configuring/
  • Digital Ocean: https://status.digitalocean.com/
  • Google: https://status.cloud.google.com/
  • Heroku: https://status.heroku.com/
  • Linode: https://status.linode.com/
  • Cloud66: http://status.cloud66.com/

Some vendors have patchy uptime


Management Software and Support:

Don’t lock in a vendor(s) until you have tested their services and management interfaces and can accurately forecast your future app costs.

I found that Digital Ocean was the simplest to get started with, had capped prices and had the best documentation. However, Digital Ocean does not sell advanced services or advanced support and did not have servers in Australia.

Google Cloud left a lot to be desired with product selection, setup and documentation. It did not take me long to realize I would be paying a lot more on Google platforms.

Azure was quite clean and crisp but lacked the controls I was looking for. Azure is designed to be simple with a professional appearance (I found the default security was not high enough for my unmanaged Ubuntu servers). Azure was 4x the cost of Digital Ocean servers and 2x the cost of AWS.

AWS management interfaces were very confusing at first but support was not far away online.  AWS seemed to have the most accurate cost estimators and developer tools to make it my default choice.

Free Trials

When searching for a cloud provider to test look for free trials and have a play before you decide what is best.

https://aws.amazon.com/free/ – 12 Month free trial.

https://azure.microsoft.com/en-us/free/ –  $200 credit.

Digital Ocean offers 2 months free for new customers.

Cloudant offered $50 free a month for a single multi-tenant NoSQL database, but after the IBM acquisition the costs seem steep (financing is even available, so it must be expensive). I walked away from IBM because it was going to cost me $4,000 a month for 1 dedicated Cloudant CouchDB node.

Costs

It is hard to forecast your costs if you do not know what components you will use, what the CPU activity will be and what data will be delivered.

Google and AWS have a confusing mix of base rates, CPU credits, and data costs. You can boost your credits and usage but it will cost you compared to a flat rate server cost.

Digital Ocean and Linode offer great low rates for unmanaged servers, with reasonable extra charges where other vendors scalp from the get-go, but they lack a global presence.

Azure is a tad more expensive than AWS and a lot more expensive than Digital Ocean.

At some point you need to spin up some servers, play around, and decide if you need to change to another vendor. I was tempted by IBM’s Cloudant CouchDB DBaaS but it would have been $4,000 USD a month (it did come with 24/7 techs that monitored the service for me).

Databases

Relational databases like MySQL and SQL Server are solid choices but replication can be tricky. See my guide here.

  • NoSQL databases are easier to scale up and out, but more care has to be given to the software controlling the data and collisions. Relational databases are harder to scale but are designed to enforce referential integrity.

Design what you need and then choose a relational, NoSQL or mix of databases.  A good API will join a mix of databases and deliver the best of both worlds.

E.g. geographic data may best be served from MongoDB but related customer data from MySQL or MS SQL Server.

Database costs will also impact your database decisions. E.g. why set up SQL Server when MySQL will do; why set up a MongoDB cluster when a single MongoDB instance will do?

Also, when you scale out, database capabilities vary:

  • Availability – Each client can always read and write data.
  • Consistency – All clients have the same view of the data
  • Partition Tolerance – The System works well despite physical network partitions.


Database decisions will impact the code and complexity of your application.

Website and API Endpoint

The website will be the glue that sticks all the pieces together.  An API on a web server (e.g. https://www.myserver.com/api/v1/do/something/important ) may trigger these actions.

  1. Check the request origin (IP ban) – check the IP cache or request a new IP lookup
  2. Validate SSL status.
  3. Check the user’s login tokens (are they logged in?) – log output
  4. Check a database (MySQL)
  5. Check for permissions – is this action allowed to happen?
  6. Check for rate-limiting – has a threshold been exceeded?
  7. Check another database (MongoDB)
  8. Prepare data
  9. Resolve the API request – return the data.
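The ordered checks above can be sketched as a short pipeline in plain NodeJS. This is a minimal sketch with hypothetical stub checks (the real database lookups in steps 4 and 7 are reduced to simple flags for brevity); any check that fails short-circuits the request.

```javascript
// Hypothetical stub checks standing in for steps 1-6 above (DB lookups omitted).
const checks = [
  async (req) => { if (req.bannedIp) throw new Error('403 IP banned'); },      // 1. origin / IP ban
  async (req) => { if (!req.secure) throw new Error('403 SSL required'); },    // 2. SSL status
  async (req) => { if (!req.token) throw new Error('401 not logged in'); },    // 3. login token
  async (req) => { if (!req.permitted) throw new Error('403 forbidden'); },    // 5. permissions
  async (req) => { if (req.hits > 100) throw new Error('429 rate limited'); }, // 6. rate limit
];

async function handle(req) {
  for (const check of checks) await check(req); // run the checks in order
  return { status: 200, data: 'payload' };      // 8-9. prepare and return data
}

// Usage: a request that passes every stub check resolves with a 200
handle({ secure: true, token: 'abc', permitted: true, hits: 3 })
  .then((res) => console.log(res.status)); // prints 200
```

In a real server each entry would be a middleware function (Express-style) hitting the IP cache, MySQL and MongoDB instead of reading flags off the request object.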

A web server then becomes very important as it is managing a lot. If you decide to use a remote “as a service” ID management API or application endpoint, would each of the steps happen in a reasonable time-frame?  StormPath may be a great service for ID auth but I had issues with reliability early on and costs were unpredictable; Google Firebase is great at application endpoints but can be expensive.

Carefully evaluate the pros and cons of going DIY/self-managed versus a mix of “as a service” and full “as a service”.

I find that NGINX and NodeJS are the perfect balance between cost, flexibility, scalability and risk [ link to my scalability guide ]. NodeJS is great for integrating MySQL, API or MongoDB calls into the back end in a non-blocking way.  NodeJS can easily integrate caching and connection pooling to enhance throughput.
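To illustrate the caching idea, here is a minimal sketch with a hypothetical `dbQuery` stub standing in for a pooled MySQL query (e.g. `pool.query` from a MySQL driver); the cache itself is just an in-memory Map with a TTL.

```javascript
// In-memory TTL cache in front of a (stubbed) database call.
const cache = new Map();
let dbCalls = 0;

async function dbQuery(sql) { // hypothetical stand-in for a real pooled query
  dbCalls++;
  return `rows for: ${sql}`;
}

async function cachedQuery(sql, ttlMs = 5000) {
  const hit = cache.get(sql);
  if (hit && hit.expires > Date.now()) return hit.rows; // fresh: serve from cache
  const rows = await dbQuery(sql);                      // stale or missing: query
  cache.set(sql, { rows, expires: Date.now() + ttlMs });
  return rows;
}

// Usage: the second identical query never reaches the database
(async () => {
  await cachedQuery('SELECT 1');
  await cachedQuery('SELECT 1');
  console.log(dbCalls); // prints 1
})();
```

The same pattern works for caching remote API responses, which is where most of the throughput gains come from.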

Mulesoft is a good (but expensive) API development suite https://www.mulesoft.com/platform/api

Location, Latency and Local Networks.

You will want to try and keep all of your servers and services as close together as possible. Don’t spin up a Digital Ocean server in Singapore if your customers are in Australia (the Netflix effect will see latency fall off a cliff at night). Also, having a database on one vendor and a web server on another vendor may add extra latency; try and use the same vendor in the same data centre.

Don’t forget SSL will add about 40ms to any request locally (and up to 200ms for overseas servers), which does impact maximum concurrent users (but you need strong SSL).

  • Application scalability on a budget (my journey)
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
  • The quickest way to setup a scalable development ide and web server
  • More here: https://fearby.com/

Also, remember the servers may have performance limitations (maximum IOPS); sometimes you need to pay for higher IOPS or performance tiers to get better throughput.

Security

Ensure that everything is secure, logged and you have some sort of IP banning or rate-limiting and session tokens/expiry and or auto log out.

Your servers need to be patched and potential exploits monitored, don’t delay updating software like MySQL and OpenSSL when exploits are known.

Consider getting advice from a company like https://www.whitehack.com.au/ where they can review your code and perform penetration testing.

  • Beyond SSL with Content Security Policy, Public Key Pinning etc
  • Update OpenSSL on a Digital Ocean VM
  • Adding a commercial SSL certificate to a Digital Ocean VM
  • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

You may want to limit the work you do on authorization management and get a third party to do it; https://www.okta.com/ or http://www.stormpath.com can help here.

You will certainly need to implement two-factor authentication, OAuth 2, session tokens, forward secrecy, rate limiting, IP logging and polymorphic data return via your API.  Security is a big one.
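A rate limiter like the one mentioned can be as simple as a counter per IP per time window. This is a minimal in-memory sketch (all names hypothetical); a real multi-server deployment would keep the counters in a shared store such as Redis so the limits hold across servers.

```javascript
// Fixed-window rate limiting keyed by client IP (in-memory sketch only).
const windows = new Map();

function allowRequest(ip, limit = 5, windowMs = 60000) {
  const now = Date.now();
  const w = windows.get(ip);
  if (!w || now - w.start >= windowMs) {   // first hit, or the window expired
    windows.set(ip, { start: now, count: 1 });
    return true;
  }
  w.count++;
  return w.count <= limit;                 // reject once over the threshold
}

// Usage: the sixth request inside one window is rejected
for (let i = 0; i < 5; i++) allowRequest('203.0.113.7');
console.log(allowRequest('203.0.113.7')); // prints false
```

Rejected requests would typically get an HTTP 429 and be written to the IP log for later banning.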

Here is a benchmark for an API hit overseas with and without SSL


Moving my Digital Ocean Server from Singapore to AWS in Australia dropped my API requests to under 200ms (SSL, complete authorization, logging and payload delivery).

Monitoring and Benchmarking

Monitoring your website’s health (CPU, RAM and disk) along with software and database monitoring is very important for maintaining a service.

https://keymetrics.io/ is a great NodeJS service and API monitoring application.


PM2 is a great node module that integrates Keymetrics with NodeJS.

Siege is a good command-line benchmark tool, check out my guide here.

http://www.loader.io is a great service for hitting your website from across the world.


End to End Analytics

You should be capturing analytics from end to end (failed logins, invalid packets, user usage etc.).  Caching content and blocking bad users can then be implemented to improve performance.

Developer access

All platforms give developers varied levels of access to change things.  I prefer the awesome http://www.c9.io for connecting to my servers.


If you go with high-level SaaS (Microsoft CRM, Sitecore CRM etc.) you may be locked into outdated software that is hard for developers to modify and support.

Don’t forget your customers.

At this point, you will have a million thoughts on possible solutions and problems, but don’t forget to concentrate on what you are developing and whether it is viable.  Have you validated customer needs, and will you be working to solve those problems?

Project Pre-Mortem

Don’t be afraid to research what could go wrong, are you about to spend money on adding another layer of software to improve something but not solve the problem at hand?

It is a good idea to quickly guess what could go wrong before deciding on a way forward.

  • Server scalability
  • Features not polished
  • Does not meet customer needs
  • Monetization Issues
  • Unknown usage costs
  • Bad advice from consultants
  • Vendors collapsing or being bought out.

Long game

Make sure you choose a vendor that won’t go broke.  Smaller vendors like Parse were gobbled up by Facebook, and Facebook then closed Parse’s doors leaving customers in the lurch.  Even C9.io has been purchased by AWS and its future is uncertain.  Will Linode and Digital Ocean be able to compete against AWS and Azure? Don’t lock yourself into one solution and always have a backup plan.

Do

  • Do know what your goal is.
  • Make a start.
  • Iterate in public.
  • Test everything.

Don’t

  • Don’t trust what you have been told.
  • Don’t develop without a goal.
  • Don’t be attracted to buzzwords, new tech and shiny objects.

Good luck and happy coding.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

Edit v1.11

Filed Under: Backup, Business, Cloud, Development, Hosting, Linux, MySQL, NodeJS, Scalability, Scalable, Security, ssl, Uncategorized Tagged With: digital disruption, Innovation

Beyond SSL with Content Security Policy, Public Key Pinning etc

December 6, 2016 by Simon

A big shoutout goes to Troy Hunt and Leo Laporte and Steve Gibson from https://www.grc.com/securitynow for sharing their security knowledge.

Pre-Requisite: SSL Certificate

I have mentioned how to obtain an A+ rating on your SSL certificate with the help of https://ssllabs.com/ssltest in my Digital Ocean, AWS and Vultr Ubuntu server (NGINX, NodeJS etc) setup guides. Also, an SSL certificate can be free and installed in 1 minute.

I will assume you have an SSL Labs A+ rating on your site and you want to secure it some more by enabling content headers for Content Security Policy and Public Key Pinning.

Why

Read this article from Troy Hunt that explains why CSP is important: The JavaScript Supply Chain Paradox: SRI, CSP and Trust in Third Party Libraries

HTTP Public Key Pinning

Full credit goes to this site for explaining how to set up HTTP Public Key Pinning (adding an NGINX header that references two backup keys linked to the main certificate). Basically, we need to generate two new certificates on our server (linked to our master certificate from our CA) and deliver the hashes to the client as a header.

 cd /etc/nginx/
 mkdir ssl.bak
 sudo cp -R ./ssl/* ./ssl.bak/
 cd ssl

openssl x509 -pubkey < chained.crt | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
> Base64Output01Removed###########################=

openssl genrsa -out yourserver.first.pin.key 4096
> Generating RSA private key, 4096 bit long modulus
> …

openssl req -new -key yourserver.first.pin.key -sha256 -out yourserver.first.pin.csr
> Country Name (2 letter code) [AU]:
> State or Province Name (full name) [Some-State]:
> Locality Name (eg, city) []:
> Organization Name (eg, company) [Internet Widgits Pty Ltd]:
> Organizational Unit Name (eg, section) []:
> Common Name (e.g. server FQDN or YOUR name) []:
> Email Address []:
> Please enter the following ‘extra’ attributes
> to be sent with your certificate request
> A challenge password []:
> An optional company name []:

openssl req -pubkey < yourserver.first.pin.csr | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
> Base64Output02###########################=

openssl genrsa -out yourserver.second.pin.key 4096
> Generating RSA private key, 4096 bit long modulus
> …

openssl req -new -key yourserver.second.pin.key -sha256 -out yourserver.second.pin.csr
> Country Name (2 letter code) [AU]:
> State or Province Name (full name) [Some-State]:
> Locality Name (eg, city) []:
> Organization Name (eg, company) [Internet Widgits Pty Ltd]:
> Organizational Unit Name (eg, section) []:
> Common Name (e.g. server FQDN or YOUR name) []:
> Email Address []:
> Please enter the following ‘extra’ attributes
> to be sent with your certificate request
> A challenge password []:
> An optional company name []:

openssl req -pubkey < yourserver.second.pin.csr | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | base64
> Base64Output03###########################=

# Add the following to the NGINX default configuration

server {
…
add_header Public-Key-Pins 'pin-sha256="Base64Output01###########################="; pin-sha256="Base64Output02###########################="; pin-sha256="Base64Output03###########################="; max-age=2592000; includeSubDomains';
…
}

nginx -t
nginx -s reload
/etc/init.d/nginx restart

This should resolve the pinning warnings. You can check with https://securityheaders.io

Content Security Policy

Content Security Policy helps prevent cross-site scripting (XSS), clickjacking and other code injection attacks on your site by allowing your site to pre-define where resources can load from. Content Security Policy is supported in modern web browsers only. Here is a good explanation of CSP and a hacker’s cheat sheet showing how to XSS-inject a site.

You can use this site to review your website’s security (or your bank’s): https://securityheaders.io/

I decided to check a big bank’s CSP/XSS configuration.


St George Bank appears to be missing a number of potential security configurations (above). I ran the checker over a site I was building and got a missing Content Security Policy warning too.

If your site just delivers text (no images or media) and does not use Google Analytics or content from remote CDN’s then defining a Content Security Policy is easy in NGINX.

add_header Content-Security-Policy "default-src https:" always;
add_header X-Content-Security-Policy "default-src https:" always;
add_header X-WebKit-CSP "default-src https:" always;

But chances are you will need to generate a detailed CSP to allow Google Analytics, fonts and scripts to load/run.

There are loads of sites that will help you generate a CSP ( here, here etc) but it is best to add the configuration above to your NGINX config, load your website in Google Chrome and look for any CSP errors, add them into the CSP generator, export to NGINX, save, and recheck in Google Chrome until all issues are solved.

A recent version of Google Chrome will give you a good indication of what resources it blocked (that were not covered in your Content Security Policy).


I suggest you go to https://report-uri.io/home/generate and for each failing resource resolve that issue by defining the allowed resources in your policy.

After about 20 reloads of my CSP at https://report-uri.io/home/generate and CSP validation with https://cspvalidator.org/, I have a working minimum Content Security Policy allowing resources on my site (real names redacted; note the CDN server that I use for misc resources).


My Final Content Security Policy.

script-src 'self' 'unsafe-inline' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; style-src 'self' 'unsafe-inline' https://myservername.com:* https://fonts.googleapis.com:*; img-src 'self' https://myservername.com:* https://*.google-analytics.com https://*.google.com; font-src 'self' data: https://myservername.com:* https://myservername-cdn:* https://fonts.googleapis.com:* https://fonts.gstatic.com:*; connect-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; media-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; child-src 'self' https://player.vimeo.com https://www.youtube.com; form-action 'self' https://myservername.com:* https://myservername-cdn:*;

Spaced out to see what is set.

script-src 'self' 'unsafe-inline' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; 
	style-src 'self' 'unsafe-inline' https://myservername.com:* https://fonts.googleapis.com:*; 
	img-src 'self' https://myservername.com:* https://*.google-analytics.com https://*.google.com; 
	font-src 'self' data: https://myservername.com:* https://myservername-cdn:* https://fonts.googleapis.com:* https://fonts.gstatic.com:*; 
	connect-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; 
	media-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; 
	child-src 'self' https://player.vimeo.com https://www.youtube.com; 
	form-action 'self' https://myservername.com:* https://myservername-cdn:*;

Here is what I added to my NGINX configuration (but with my real servers names)

add_header Content-Security-Policy "script-src 'self' 'unsafe-inline' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; style-src 'self' 'unsafe-inline' https://myservername.com:* https://fonts.googleapis.com:*; img-src 'self' https://myservername.com:* https://*.google-analytics.com https://*.google.com; font-src 'self' data: https://myservername.com:* https://myservername-cdn:* https://fonts.googleapis.com:* https://fonts.gstatic.com:*; connect-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; media-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; child-src 'self' https://player.vimeo.com https://www.youtube.com; form-action 'self' https://myservername.com:* https://myservername-cdn:*; " always;
add_header X-Content-Security-Policy "script-src 'self' 'unsafe-inline' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; style-src 'self' 'unsafe-inline' https://myservername.com:* https://fonts.googleapis.com:*; img-src 'self' https://myservername.com:* https://*.google-analytics.com https://*.google.com; font-src 'self' data: https://myservername.com:* https://myservername-cdn:* https://fonts.googleapis.com:* https://fonts.gstatic.com:*; connect-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; media-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; child-src 'self' https://player.vimeo.com https://www.youtube.com; form-action 'self' https://myservername.com:* https://myservername-cdn:*; " always;
add_header X-WebKit-CSP "script-src 'self' 'unsafe-inline' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; style-src 'self' 'unsafe-inline' https://myservername.com:* https://fonts.googleapis.com:*; img-src 'self' https://myservername.com:* https://*.google-analytics.com https://*.google.com; font-src 'self' data: https://myservername.com:* https://myservername-cdn:* https://fonts.googleapis.com:* https://fonts.gstatic.com:*; connect-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; media-src 'self' https://myservername.com:* https://myservername-cdn:* https://*.google-analytics.com https://*.google.com; child-src 'self' https://player.vimeo.com https://www.youtube.com; form-action 'self' https://myservername.com:* https://myservername-cdn:*; " always;

Misc SSL Certificate Issues

https://www.ssllabs.com/ssltest is the go-to site for checking your site’s SSL certificate for issues.


Basic Server testing with asafaweb.com

https://asafaweb.com/ is a great site that tests your server for common security issues. Click on the orange or red buttons for an explanation and resolution.


Testing with SecurityHeaders.io

If everything is configured correctly you will get all green.


CVE Exploits Database

After your server is secure you cannot sit back and pat yourself on the back; vulnerabilities can appear overnight and it is up to you to patch and update your server, services and software.

  • NGINX from time to time has vulnerabilities that need urgent patching.
  • OpenSSL needs checking for vulnerabilities from time to time. A bug was found in June 2016 that required urgent patching (blog post here).
  • Spectre and Meltdown bug

Your Code

Once you have a secure web server, SSL, XSS pinning and other security configuration setup you will need to ensure any code you develop is secure too.

Read the Open Web Application Security Project’s Top 10 Developer Security considerations.

About OWASP.

Security

As a precaution, do check your website often in https://www.shodan.io and see if it exposes open software or is known to hackers.

Keep Yourself Informed

Follow as many security researchers as you can on Twitter and keep up to date. (e.g 0xDUDE)

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server

Good luck.


V1.9 Added Let’s Encrypt info

V1.8 added Troy Hunt article on CSP

v1.7 added link to Hardening a Linux Server link

V1.6 security

Filed Under: Development, Security Tagged With: CSP, security, ssl, XSS

Setting up a fast distributed MySQL environment with SSL

September 13, 2016 by Shane Bishop

The following is a guest post from Shane Bishop from https://ewww.io/ (developer of the awesome EWWW Image Optimizer plugin for WordPress). Read my review of this plugin here.


Setting up a fast distributed MySQL environment with SSL

I’m a big fan of redundancy and distribution when it comes to network services. I don’t like to keep all my servers in one location, or even with a single provider. I’m currently using three different providers right now for various services. But when it comes to database communication, this poses a bit of a problem. Naturally, you would implement a firewall to restrict connections only to specific IP addresses, but if you’ve got servers all across the United States (or the globe), the communication is completely unencrypted by default.

Fortunately, MySQL has the ability to secure those communications and even require that specific user accounts use encryption for all communication. So, I’m going to show you how to set up that encryption, give a brief overview of setting up MySQL replication and give you several examples of different ways to securely connect to your database server(s). I used several different resources in setting this up for EWWW I.O., but none of them had everything I needed and some had critical errors in them:

Setting up MySQL and secure connections

Getting Started with MySQL over SSL

How to enable SSL for MySQL server and client

I use Debian 8 (fewer major releases than Ubuntu, and rock-solid stability), so these instructions will apply to MySQL 5.5 and PHP 5.6, although most of it will work fine on any system. If you aren’t using PHP, you can just skip that section, and apply this to MySQL client connections, and replication. I’ll try to point out any areas where you might have difficulty on other versions, and you’ll need to modify any installation steps that use apt-get to use yum instead if you’re on CentOS, RHEL, or SuSE. If you’re running Windows, sorry, but just stop it. I would never trust a Windows server to do any of these things on the public Internet even with secured connections. You could attempt to do some of this on a Windows box for testing, but you can setup a Linux virtual machine using VirtualBox for free if you really want to test things out locally.

Setup the Server

First, we need to install the MySQL server on our system (you should always use sudo, instead of logging in as root, as a matter of “best practice”):

sudo apt-get install mysql-server

The installation will ask for a root password, and for you to confirm it. This is the account that has full and complete privileges on your MySQL server, so pick a good one. If this gets compromised, all is lost (or very nearly). Backups are your best friend, but even then it might be difficult to know what damage was done, and when. You’ll also want to run this, to make sure your server isn’t allowing test/guest access:

sudo mysql_secure_installation

You should answer yes to just about everything, although you don’t have to change your root password if you already set a good one. And just to make sure I’m clear on this: the root password here is not the same as the root password for the server itself. This root password is only for MySQL. You shouldn’t ever use the root login on your server, EVER. It should be disabled so that you can only run privileged operations via sudo, which you can do like this:

sudo passwd -l root

That command just disabled the root user, and should also be a good test to verify you already have an account that can sudo successfully, although I’d recommend testing it with something a little less drastic before you disable the root login.

Generating Certificates & Keys for the server

Normally, setting up secure connections involves purchasing a certificate from an established Certificate Authority (CA), and then downloading that certificate to your machine. However, the prevailing attitude with MySQL seems to be that you should build your own CA so that no one else has any influence on the private keys used to issue your server certificates. That said, you can still purchase a cert if that feels like the right route to go for you. Every organization has different needs, and I’m a bit new to the MySQL SSL topic, so I won’t pretend to be an expert on what everyone should do.

The Certificate Authority consists of a private key, and a CA certificate. These are then used to generate the server and client certificates. Each time you generate a certificate, you first need a private key. These private keys cannot be allowed to fall into the wrong hands, but you also need to have them available on the server, as they are used in establishing a secure connection. So if anyone else has access to your server, you should make sure the permissions are set so that only the root user (or someone with sudo privileges) can access them.

The CA and your server’s private key are used to authenticate the certificate that the server uses when it starts up, and the CA certificate is also used to validate any incoming client certificates. By the same token, the client will use that same CA certificate to validate the server’s certificate as well. I store the bits necessary in the /etc/mysql/ directory, so navigate into that directory, and we’ll use that as a sort of “working directory”. Also, the first command here lets you establish a “sudo shell” so that you don’t have to type sudo in front of every command. Let’s generate the CA private key:

sudo -s
cd /etc/mysql/
openssl genrsa 2048 > cakey.pem

Next, generate a certificate based on that private key:

openssl req -sha256 -new -x509 -nodes -days 3650 -key cakey.pem > cacert.pem

Of note are the -sha256 flag (do not use -sha1 anymore, it is weak), and the certificate expiration, set by “-days 3650” (10 years). Answer all the questions as best you can. The common name (CN) here is usually the hostname of the server, and I try to use the same CN throughout the process, although it shouldn’t really matter what you choose as the CN. If you follow my instructions, the CN will not be validated, only the client and server certificates get validated against the CA cert, as I already mentioned. Especially if you have multiple servers, and multiple servers acting as clients, the CN values would be all over the place, so best to keep it simple.

So the CA is now setup, and we need a private key for the server itself. We’ll generate the key and the certificate signing request (CSR) all at once:

openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem > server-csr.pem

This will ask many of the same questions, answer them however you want, but be sure to leave the passphrase empty. This key will be needed by the MySQL service/daemon on startup, and a password would prevent MySQL from starting automatically. We also need to export the private key into the RSA format, or MySQL won’t be able to read it:

openssl rsa -in server-key.pem -out server-key.pem

Lastly, we create the server certificate using the CSR (based on the server’s private key) along with the CA certificate and key:

openssl x509 -sha256 -req -in server-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > server-cert.pem

Now we have what we need for the server end of things, so let’s edit our MySQL config in /etc/mysql/my.cnf to contain these lines in the [mysqld] section:

ssl-ca=/etc/mysql/cacert.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

If you are using Debian, those lines are probably already present, but commented out (with a # in front of them). Just remove the # from those three lines. If this is a fresh install, you’ll also want to set the bind-address so that it will allow communication from other servers:

bind-address = 198.51.100.10 # replace this with your actual IP address

or you can let it bind to all interfaces (if you have multiple IP addresses):

bind-address = *

Then restart the MySQL service:

sudo service mysql restart

Permissions

If this is an existing MySQL setup, you’ll want to wait until you have all the client connections setup to require SSL, but on a new install, you can run this to setup a new user with SSL required:

GRANT ALL PRIVILEGES ON `database`.* TO 'database-user'@'%' IDENTIFIED BY 'reallysecurepassword' REQUIRE SSL;

I recommend creating individual user accounts for each database you have, so substitute the name of your database in the above command, as well as replacing the database-user and “really secure password” with suitable values. The command above also allows them to connect from anywhere in the world, and you may only want them to connect from a specific host, so you would replace the ‘%’ with the IP address of the client. I prefer to use my firewall to determine who can connect, as it is a bit easier than running a GRANT statement for every single host that is permitted. One could use a wildcard hostname like *.example.com but that would entail a DNS lookup for every connection unless you make sure to list all your addresses in /etc/hosts on the server (yuck). Additionally, using your firewall to limit which hosts can connect helps prevent brute-force attacks. I use ufw for that, which is a nice and simple command-line interface to iptables. You also need to run this after you GRANT privileges:

FLUSH PRIVILEGES;
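
If you do prefer to restrict access by host in MySQL itself rather than at the firewall, the grant looks the same with the ‘%’ swapped for a client address. A sketch, using a documentation-range placeholder IP:

```sql
GRANT ALL PRIVILEGES ON `database`.* TO 'database-user'@'203.0.113.5' IDENTIFIED BY 'reallysecurepassword' REQUIRE SSL;
FLUSH PRIVILEGES;
```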

Generating a Certificate and Key for the client

With most forms of encryption, only the server needs a certificate and key, but with MySQL, both server and client can have encryption keys. A quick test from my local machine indicated that it would automatically trust the server cert when using the MySQL client, but we’ll set up the client to use encryption just to be safe. Since we already have a CA set up on the server, we’ll generate the client cert and key on the server. First, the private key and CSR:

openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout client-key.pem > client-req.pem

Again, we need to export the key to the RSA format, or MySQL won’t be able to read it:

openssl rsa -in client-key.pem -out client-key.pem

The last step is to create the certificate, based off the CSR generated from the client key, and sign it with the CA cert and key:

openssl x509 -sha256 -req -in client-req.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > client-cert.pem
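
If you want to sanity-check the whole CA/CSR/signing flow before touching MySQL, you can run it end-to-end with throwaway files. This is just a sketch with hypothetical subject names; the -subj flags are only there so the demo runs non-interactively:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
# Throwaway CA with a one-day validity and a demo subject
openssl req -new -x509 -nodes -days 1 -sha256 -subj "/CN=demo-ca" \
  -keyout cakey.pem -out cacert.pem 2>/dev/null
# Client key and CSR, as in the article
openssl req -sha256 -newkey rsa:2048 -nodes -subj "/CN=demo-client" \
  -keyout client-key.pem -out client-req.pem 2>/dev/null
# Sign the CSR with the CA
openssl x509 -sha256 -req -in client-req.pem -days 1 \
  -CA cacert.pem -CAkey cakey.pem -set_serial 01 > client-cert.pem 2>/dev/null
# The signed cert should verify against the CA
openssl verify -CAfile cacert.pem client-cert.pem
```

If everything lines up, the last command prints “client-cert.pem: OK”.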

We now need to copy three files to the client. The certs are just text files, so you can copy and paste them, or you can use scp to transfer them:

  • cacert.pem
  • client-key.pem
  • client-cert.pem

If you don’t need the full mysql-server on the client, or you just want to test it out, you can install the mysql-client like so:

sudo apt-get install mysql-client

Then, open /etc/mysql/my.cnf and put these three lines in the [client] section (usually near the top):

ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem

You can then connect to your server like so:

mysql -h 198.51.100.10 -u database-user -p

It will ask for a password, which you set to something really awesome and secure, right? At the MySQL prompt, you can just type the following command shortcut, and look for the SSL line, which should say something like “Cipher in use is …”

\s

You can also specify the --ssl-ca, --ssl-cert, and --ssl-key settings on the command line of the ‘mysql’ command to set the locations dynamically if need be. You may also be able to put them in your .my.cnf file (the leading dot makes it a hidden file, and it should live in ~/, your home directory). So for me that might be /home/shanebishop/.my.cnf
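
As a sketch, a per-user ~/.my.cnf would look just like the [client] section above, with the paths adjusted to wherever you copied the files (and chmod 600 it, since it may also hold credentials):

```
[client]
ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem
```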

Using SSL for mysqldump

To my knowledge, mysqldump does not use the [client] settings, so you can specify the cert and key locations on the command line like I mentioned, or you can add them to the [mysqldump] section of /etc/mysql/my.cnf. To make sure SSL is enabled, I run it like so:

mysqldump --ssl -h 198.51.100.10 -u database-user -p'reallysecurepassword' database > database-backup.sql
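
Alternatively, the same three paths can live in the [mysqldump] section of /etc/mysql/my.cnf so you don’t have to pass them on the command line. A sketch, mirroring the [client] section from earlier:

```
[mysqldump]
ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem
```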

Setup Secure Connection from PHP

That’s all well and good, but most of the time you won’t be manually logging in with the mysql client, although mysqldump is very handy for automated nightly backups. I’m going to show you how to use SSL in a couple other areas, the first of which is PHP. It’s recommended to use the “native driver” packages, but from what I could see, the primary benefit of the native driver is decreased memory consumption.  There just isn’t much to see in the way of speed improvement, but perhaps I didn’t look long enough. However, being one to follow what the “experts” say, you can install MySQL support in PHP like so:

sudo apt-get install php5-mysqlnd

If you are using PHP 7 on a newer version of Ubuntu, the “native driver” package is now standard:

sudo apt-get install php7.0-mysql

If you are on a version of PHP less than 5.6, you can use the example code at the Percona Blog. However, in PHP 5.6+, certificate validation is a bit more strict, and early versions just fell over when trying to use the mysqli class with self-signed certificates like we have. Now that the dust has settled with PHP 5.6 though, we can connect like so:

<?php
$server = '198.51.100.10';
$dbuser = 'database-user';
$dbpass = 'reallysecurepassword';
$database = 'database';
$connection = mysqli_init();
if ( ! mysqli_real_connect( $connection, $server, $dbuser, $dbpass, $database, 3306, '/var/run/mysqld/mysqld.sock', MYSQLI_CLIENT_SSL ) ) {
    error_log( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
    die( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
}
$result = mysqli_query( $connection, "SHOW STATUS like 'Ssl_cipher'" );
print_r( mysqli_fetch_assoc( $result ) );
mysqli_close( $connection );
?>

Saving this as mysqli-ssl-test.php, you can run it like this, and you should get similar output:

shanebishop@server:~$ php mysqli-ssl-test.php
Array
(
  [Variable_name] => Ssl_cipher
  [Value] => DHE-RSA-AES256-SHA
)
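
If you prefer PDO over mysqli, the same SSL options are available via the PDO::MYSQL_ATTR_SSL_* constants. This is a hedged sketch reusing the placeholder credentials from above, not tested code from the article:

```php
<?php
$options = array(
    PDO::MYSQL_ATTR_SSL_CA   => '/etc/mysql/cacert.pem',
    PDO::MYSQL_ATTR_SSL_CERT => '/etc/mysql/client-cert.pem',
    PDO::MYSQL_ATTR_SSL_KEY  => '/etc/mysql/client-key.pem',
);
try {
    $pdo = new PDO( 'mysql:host=198.51.100.10;dbname=database', 'database-user', 'reallysecurepassword', $options );
    // Same check as the mysqli version: a non-empty Ssl_cipher means SSL is on
    print_r( $pdo->query( "SHOW STATUS LIKE 'Ssl_cipher'" )->fetch( PDO::FETCH_ASSOC ) );
} catch ( PDOException $e ) {
    error_log( 'Connect Error: ' . $e->getMessage() );
}
?>
```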

Setup Secure (SSL) Replication

That’s all fine for a couple of servers, but at EWWW.IO I quickly realized I could speed things up if each server had a copy of the database. In particular, a significant speed improvement can be had if you set up all SELECT queries to use the local database (replicated from the master). While a query to the master server might take 50ms or more, querying the local database gives you sub-millisecond query times. Beyond that, I also wanted redundant write ability, so I set up two masters that replicate off each other and ensure I never miss an UPDATE/INSERT/DELETE transaction if one of them dies. I’ve been running this setup since the fall of 2013, and it has worked quite well. There are a few things to watch out for. The biggest is that if a master server has a hard reboot and MySQL doesn’t get shut down properly, you have to set up replication again on any slaves that communicate with that master, as the binary log gets corrupted. You also have to resync the other master in a similar manner.

The other things to be careful of are conflicting INSERT statements. If you try to INSERT two records with the same primary key from two different servers, it will cause a collision if those keys are set to be UNIQUE. You also have to be careful if you are using numerical values to track various data points. Use MySQL’s built-in arithmetic, rather than trying to query a value, add to it in your code, and then update the new value in a separate query.

So first I’ll show you how to setup replication (just basic master to slave), and then how to make sure that data is encrypted in transit. We should already have the MySQL server installed from above, so now we need to make some changes to the master configuration in /etc/mysql/my.cnf. All of these changes should be made in the [mysqld] section:

max_connections = 500 # the default is 100, and if you get a lot of servers running in your pool, that may not cut it
server-id = 1 # any numerical value will do, but every server should have a unique ID, I started at 1 for simplicity
log-bin = /var/log/mysql/mysql-bin.log
log-slave-updates = true # this one is only needed if you're running a dual-master setup

I’ve also just discovered that it is recommended to set sync_binlog to 1 when using InnoDB, which I am. I haven’t had a chance to see how that impacts performance, so I’ll update this after I’ve had a chance to play with it. The beauty of that is it *should* avoid the problems with a server crash that I mentioned above. At most, you would lose 1 transaction due to an improper server shutdown. All my servers use SSD, so the performance hit should be minimal, but if you’re using regular spinning platters, then be careful with the sync_binlog setting.
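
If you want to try it, it is a single line in the same [mysqld] section:

```
sync_binlog = 1
```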

Next, we do some changes on the slave config:

server-id = 2 # make sure the id is unique
report-host = myserver.example.com # this should also be unique, so that your master knows which slave it is talking to
log-bin = /var/log/mysql/mysql-bin.log

Once that is set up, you can run a GRANT statement similar to the one above to add a user for replication, or you can just give that user the REPLICATION SLAVE privilege.

IMPORTANT: If you run this on an existing slave-master setup, it will break replication, as the REQUIRE SSL statement seems to apply to all privileges granted to this user, and we haven’t told it what certificate and key to use. So run the CHANGE MASTER TO statement further down, and then come back here to enforce SSL for your replication user.

GRANT REPLICATION SLAVE ON *.* TO 'database-user'@'%' REQUIRE SSL;

Now we’re ready to synchronize the database from the master to the slave the first time. The slave needs 3 things:

  1. a dump of the existing data
  2. the binary log filename, as MySQL adds a numerical identifier to the log-bin setting above, and increments it periodically as the binary logs hit their max size
  3. the position within the binary log where the slave should start applying changes

The hard way (that I used 3 years ago) can be found in the MySQL Documentation. The easy way is to use mysqldump (found on a different page in the MySQL docs), which you probably would have used anyway to obtain a dump of the existing data:

mysqldump --all-databases --master-data -u root -p > dbdump.db
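
For reference, the --master-data flag embeds a statement like this near the top of the dump (the log filename and position will differ on your server):

```sql
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;
```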

By using the --master-data flag, it will insert items #2 and #3 into the generated SQL file, and you will avoid having to hand-type the binary log filename and coordinates. At any rate, you then need to log in via your MySQL client on the slave server, and run a few commands at the MySQL prompt to prep the slave for the import (replacing the values as appropriate):

mysql -uroot -p
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    -> MASTER_HOST='master_host_name',
    -> MASTER_USER='replication_user_name',
    -> MASTER_PASSWORD='replication_password';
exit

Then you can import the dbdump.db file (copy it from the master using SCP or SFTP):

mysql -uroot -p < dbdump.db

Once that is imported, we want to make sure our replication is using SSL. You can also run this on an existing server to upgrade the connection to SSL, but be sure to STOP SLAVE first:

mysql> CHANGE MASTER TO MASTER_SSL=1,
    -> MASTER_SSL_CA='/etc/mysql/cacert.pem',
    -> MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
    -> MASTER_SSL_KEY='/etc/mysql/client-key.pem';

After that, you can start the slave:

START SLAVE;

Give it a few seconds, but you should be able to run this to check the status pretty quick:

SHOW SLAVE STATUS\G
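
The output is long, but on a healthy slave the interesting fields look something like this (your values will differ):

```
Slave_IO_State: Waiting for master to send event
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0
```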

A successfully running slave should say something like “Waiting for master to send event”, which simply indicates that it has applied all transactions from the master, and is not lagging behind.

If you have additional slaves to set up, you can use the exact same dbdump.db and all the SQL statements that followed the mysqldump command, but if you add them a month or so down the road, there are two ways of doing it:

  1. Grab a fresh database dump using the mysqldump command, and repeat all of the steps that followed the mysqldump command above.
  2. Stop the MySQL service on an existing slave and the new slave. Then copy the /var/lib/mysql/ folder to the new slave and make sure it is owned by the MySQL user/group: chown -R mysql:mysql /var/lib/mysql/. Lastly, start both slaves up again, and they’ll catch up to the master pretty quickly.

Conclusion

In a distributed environment, securing your MySQL communications is an important step in preventing unauthorized access to your services. While it can be a bit daunting to put all the pieces together, it is well worth the effort to make sure no one can intercept traffic from your MySQL server(s). The EWWW.IO API supports encryption at every layer of the stack, to make sure our customer information stays private and secure. Doing so in your environment creates trust with your user base, and customer trust is a precious commodity.

Security

As a precaution, check your website regularly on https://www.shodan.io to see whether it is exposing software or is known to hackers.
Shane Bishop

Contact: https://ewww.io/contact-us/

Twitter: https://twitter.com/nosilver4u

Facebook: https://www.facebook.com/ewwwio/

Check out the fearby.com guide on bulk optimizing images automatically in WordPress.  Official Site here: https://ewww.io/

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V1.1 added shodan.io info

Filed Under: Cloud, Development, Linux, MySQL, Scalable, Security, ssl, VM Tagged With: certificate, cloud, debian, distributed, encrypted, fast, mysql, ssl

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

August 16, 2016 by Simon Fearby

This blog lists the actions I went through to set up an AWS EC2 Ubuntu server and add the usual applications. This follows on from my Digital Ocean server setup guide and Vultr setup guide.

Variable Pricing

Amazon Web Services gives developers 12 months of free access to a range of products. Amazon doesn’t have flat-rate fees like Digital Ocean; instead, AWS grants you a minimum level of CPU credits for each server type. A “t2.micro” (1 CPU/1GB RAM, $0.013/hour) gets 6 CPU credits an hour. That is enough for the CPU to run at 10% all the time, and you can bank up to 144 CPU credits to use when the application spikes. The “t2.micro” is the free tier server (costs nothing for 12 months), but when the trial runs out it’s $9.50 a month. The next server up is a “t2.small” (1 CPU, 2GB RAM); you get 12 CPU credits an hour and can bank 288, which is enough for 20% CPU usage all the time.

The “t2.medium” server (2 CPUs, 4GB RAM) allows 40% CPU usage: 24 CPU credits an hour with 576 bankable. That server costs $0.052 an hour, or about $38 a month. Don’t forget to cache content.

I used about 40 CPU credits generating a 4096-bit secure prime Diffie-Hellman key for an SSL certificate on my t2.micro server. More information on AWS instance pricing here and here.

Creating an AWS Account (Free Trial)

The signup process is simple.

  1. Create a free AWS account.
  2. Enter your CC details (for any non-free services) and submit your account.

It will take 2 days or more for Amazon to manually approve your account. When you have been approved, navigate to https://console.aws.amazon.com, log in, and set your region in the top right next to your name (in my case I will go with Australia ‘ap-southeast-2‘).

My console home is now: https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-2#LaunchInstanceWizard

Create a Server

You can follow the prompts to set up a free tier EC2 Ubuntu server here.

1. Choose Ubuntu EC2

2. Choose Instance Type: t2.micro (1x CPU, 1GB RAM)

3. Configure Instance: 1

4. Add Storage: /dev/sda1, 8GB+, 10-3000 IOPS

5. Tag Instance: Your own role specific tags

6. Configure Security Group: Default firewall rules.

7. Review

Tip: Create a 25GB volume (instead of 8GB), or you will need to add an extra volume and mount it with the following commands.

sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /mydata
sudo nano /etc/fstab
(append the following line)
/dev/xvdf    /mydata    ext4    defaults,nofail    0    2
sudo mount -a
cd /mydata
ls -al

Part of the EC2 server setup was to save a .pem file to your SSH folder on your local PC (~/.ssh/myserverkeypair.pem).

You will need to secure the file:

chmod 400 ~/.ssh/myserverkeypair.pem

Before we connect to the server we need to configure the firewall here in the Amazon Console.

Type Protocol Port Range Source Comment
HTTP TCP 80 0.0.0.0/0 Opens a web server port for later
All ICMP ALL N/A 0.0.0.0/0 Allows you to ping
All traffic ALL All 0.0.0.0/0 Not advisable long term but OK for testing today.
SSH TCP 22 0.0.0.0/0 Not advisable, try and limit this to known IP’s only.
HTTPS TCP 443 0.0.0.0/0 Opens a secure web server port for later

DNS

You will need to assign a static IP to your server (the public IP is not static by default). Here is a good read on connecting a domain name to the IP and assigning an Elastic IP to your server. Once you have assigned an Elastic IP you can point your domain to your instance.

AWS IP

Installing the Amazon Command Line Interface utils on your local PC

This is required to see your servers console screen and then connect via SSH.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
/usr/local/bin/aws --version

You now need to configure your AWS CLI, first generate Access Keys here. While you are there setup Multi-Factor Authentication with Google Authenticator.

aws configure
> AWS Access Key ID: {Your AWS Access Key ID}
> AWS Secret Access Key: {Your AWS Secret Access Key}
> Default region name: ap-southeast-2 
> Default output format: text

Once you have configured your CLI you can connect and review your Ubuntu console output (the instance ID can be found in your EC2 server list).

aws ec2 get-console-output --instance-id i-123456789

Now you can hopefully connect to your server and accept any certificates to finish the connection.

ssh -i ~/.ssh/myserverkeypair.pem ubuntu@your-elastic-ip

AWS Console

Success, I can now access my AWS Instance.

Setting the Time and Daylight Savings.

Check your time.

sudo hwclock --show
> Fri 21 Oct 2016 11:58:44 PM AEDT  -0.814403 seconds

My daylight savings change has not kicked in.

Install ntp service

sudo apt install ntp

Set your Timezone

sudo dpkg-reconfigure tzdata

Go to http://www.pool.ntp.org/zone/au to find your NTP servers (or go here if you are outside Australia):

server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org

Add the NTP servers to “/etc/ntp.conf” and restart the NTP service.

sudo service ntp restart

Now check your time again and you should have the right time.

sudo hwclock --show
> Fri 21 Oct 2016 11:07:38 PM AEDT  -0.966273 seconds

🙂

Installing NGINX

I am going to be installing the latest v1.11.1 mainline development (non-legacy version). Beware of bugs and breaking changes here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

NGINX is now installed. Try and get to your domain via port 80 (if it fails to load, check your firewall).

Installing NodeJS

Here is how you can install the latest NodeJS (development build); beware of bugs and frequent changes. Read the API docs here.

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

NodeJS is installed.

Installing MySQL

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
sudo mysql_install_db
sudo mysql_secure_installation
service mysql status

Install PHP 7.x and PHP7.0-FPM

I am going to install PHP 7 due to the speed improvements over 5.x.  Below were the commands I entered to install PHP (thanks to this guide)

sudo add-apt-repository ppa:ondrej/php
sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm
sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

NGINX Configuration

NGINX can be a bit tricky to setup for newbies and your configuration will certainly be different but here is mine (so far):

File: /etc/nginx/nginx.conf

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /var/run/nginx.pid;
events {
        worker_connections 1024;
        multi_accept on;
}
http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        client_body_buffer_size      128k;
        client_max_body_size         10m;
        client_header_buffer_size    1k;
        large_client_header_buffers  4 4k;
        output_buffers               1 32k;
        postpone_output              1460;

        proxy_headers_hash_max_size 2048;
        proxy_headers_hash_bucket_size 512;

        client_header_timeout  1m;
        client_body_timeout    1m;
        send_timeout           1m;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        # ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        gzip on;
	gzip_disable "msie6";
	gzip_vary on;
	gzip_proxied any;
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_min_length 256;
	gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        access_log /var/log/nginx/myservername.com.log;

        root /usr/share/nginx/www;
        index index.php index.html index.htm;

        server_name www.myservername.com myservername.com localhost;

        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking

        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;


        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;

        # This is handled with the header above.
        #rewrite ^/(.*) https://myservername.com/$1 permanent;

        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }

        fastcgi_param PHP_VALUE "memory_limit = 512M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;

                # include snippets/fastcgi-php.conf;

                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }

        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

Test and Reload NGINX Config

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Don’t forget to test PHP with a script that calls ‘phpinfo()’.
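
A minimal test script works fine for this. Save it as info.php in your web root (assumed to be /usr/share/nginx/www per the config above), browse to /info.php, then delete it, since it leaks server details:

```php
<?php phpinfo(); ?>
```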

Install PhpMyAdmin

Here is how you can install the latest branch of phpmyadmin into NGINX (no apache)

cd /usr/share/nginx/www
sudo mkdir -p my/secure/folder
cd my/secure/folder
sudo apt-get install git
git clone --depth=1 --branch=STABLE https://github.com/phpmyadmin/phpmyadmin.git

If you need to import databases into MySQL you will need to enable file uploads in PHP and set file upload limits. Review this guide to enable uploads in phpMyAdmin. If your database is large you may also need to change the “client_max_body_size” setting in nginx.conf (see guide here). Don’t forget to disable uploads and reduce the size limits in NGINX and PHP when you have uploaded your databases.

Note: phpMyAdmin can be a pain to install, so don’t be afraid of using an alternative management GUI. Here is a good list of MySQL management interfaces. Also check your OS app store for native MySQL database management clients.

Install an FTP Server

Follow this guide here then..

sudo nano /etc/vsftpd.conf
write_enable=YES
sudo service vsftpd restart

Install: oracle-java8 (using this guide)

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
cd /usr/lib/jvm/java-8-oracle
ls -al
sudo nano /etc/environment
> append "JAVA_HOME="/usr/lib/jvm/java-8-oracle""
echo $JAVA_HOME
sudo update-alternatives --config java

Install: ncdu – Interactive tree based folder usage utility

sudo apt-get install ncdu
sudo ncdu /

Install: pydf – better quick disk check tool

sudo apt-get install pydf
sudo pydf

Install: rcconf – display startup processes (handy when confirming pm2 was running).

sudo apt-get install rcconf
sudo rcconf

I started “php7.0-fpm” as it was not starting on boot.

I had an issue where PM2 was not starting up at server reboot and reporting to https://app.keymetrics.io.  I ended up repairing the /etc/init.d/pm2-init.sh as mentioned here.

sudo nano /etc/init.d/pm2-init.sh
# edit start function to look like this
...
start() {
    echo "Starting $NAME"
    export PM2_HOME="/home/ubuntu/.pm2" # <== add this line
    super $PM2 resurrect
}
...

Install IpTraf – Network Packet Monitor

sudo apt-get install iptraf
sudo iptraf

Install JQ– JSON Command Line Utility

sudo apt-get install jq
# Download and display a JSON file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Install Ruby – the commands below are slightly out of order because some did not work the first time for unknown reasons.

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
sudo gem install bundler
sudo git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
sudo git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
sudo git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
rbenv install 2.2.3
rbenv global 2.2.3
rbenv rehash
ruby -v
sudo apt-get install ruby-dev

Install Twitter CLI – https://github.com/sferik/t

sudo gem install t
t authorize  # follow the prompts

Mutt (send mail by command line utility)

Help site: https://wiki.ubuntu.com/Mutt

sudo nano /etc/hosts
127.0.0.1 localhost localhost.localdomain xxx.xxx.xxx.xxx yourdomain.com yourdomain

Configuration:

sudo apt-get install mutt
[configuration none]
sudo nano /etc/postfix/main.cf
[setup a]

Configure postfix guide here

Extend the History commands history

I love the history command; here is how you can expand its history and ignore duplicates.

HISTSIZE=10000
HISTCONTROL=ignoredups
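
To make those settings survive logouts, append them to ~/.bashrc (a sketch; these are standard bash variables):

```shell
# Persist the larger history and duplicate-ignoring behaviour across sessions
echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
tail -n 2 ~/.bashrc
```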

Don’t forget to check your servers IP with www.shodan.io to ensure there are no back doors.

Cont…

Next: I will add an SSL cert, lock down the server and setup Node Proxies.

If this guide was helpful please consider donating a few dollars to keep me caffeinated.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.61 added vultr guide link

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Security Tagged With: AWS, server

Application scalability on a budget (my journey)

August 12, 2016 by Simon Fearby

If you have read my other guides on https://www.fearby.com you may tell I like the self-managed Ubuntu servers you can buy from Digital Ocean for as low as $5 a month (click here to get $10 in free credit and start your server in 5 minutes). Vultr has servers as low as $2.50 a month. Digital Ocean is a great place to start up your own server in the cloud, install some software and deploy some web apps or backends (API/databases/content) for mobile apps or services. If you need more memory, processor cores or hard drive storage, simply shut down your Digital Ocean server, click a few options to increase your server resources and you are good to go (this is called “scaling up”). Don’t forget to cache content to limit usage.

This scalability guide is a work in progress (watch this space). My aim is to get 2000 concurrent users a second serving geo queries (like PokeMon Go) for under $80 a month (1x server and 1x mongoDB cluster).  Currently serving 600~1200/sec.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Estimating Costs

If you don’t estimate costs you are planning to fail.

"By failing to prepare you are preparing to fail." - Benjamin Frankin

Estimate the minimum users you need to remain viable and then the expected maximum uses you need to handle. What will this cost?

Planning for success

Anyone who has researched application scalability has come across articles on apps that crashed under load at launch. Even governments can spend tens of millions on developing a scalable solution, plan for years, and fail dismally on launch (check out the Australian Census disaster). The Australian government contracted IBM to develop a solution to receive up to 15 million census submissions between the 28th of July and the 5th of September. IBM designed a system, and a third-party performance test planned for up to 400 submissions a second, but the maximum received on census night before the system crashed was only 154 submissions a second. Predicting application usage can be hard; in the case of the Australian census, the bulk of people logged on to submit their data on the recommended night of the 9th of August 2016.

Sticking to a budget

This guide is not for people with deep pockets wanting to deploy a service to 15 million people but for solo app developers or small non-funded startups on a serious budget.  If you want a very reliable scalable solution or service provider you may want to skip this article and check out services by the following vendors.

  • Firebase
  • Azure (good guides by Troy Hunt: here, here and here).
  • Amazon Web Services
  • Google Cloud
  • NGINX Plus

The above vendors have what seems like an infinite array of products and services that can form part of your solution, but beware: the more products you use, the more complex it will be and the higher the costs. A popular app can be an expensive app. That’s why I like Digital Ocean, as you don’t need a degree to predict and plan your server’s average usage and buy extra resource credits if you go over predicted limits. With Digital Ocean you buy a virtual server and you get known memory, storage and data transfer limits.

Let’s go over topics that you will need to consider when designing or building a scalable app on a budget.

Application Design

Your application needs will ultimately decide the technology and servers you require.

  • A simple business app that shares events, products and contacts would require a basic server and MySQL database.
  • A turn-based multiplayer app for a few hundred people would require more server resources and endpoints (NGINX, NodeJS and an optimized MySQL database would be OK).
  • A larger augmented reality app for thousands of people would require a mix of databases and servers to separate the workload (an NGINX web server and NodeJS-powered API talking to a MySQL database to handle logins, and a single-server NoSQL database for the bulk of the shared data).
  • An augmented reality app with tens of thousands of users (an NGINX web server and NodeJS-powered API talking to a MySQL database to handle logins, and a NoSQL cluster for the bulk of the shared data).
  • A business-critical multi-user application with real-time chat – are you sure you are on a budget? This will require a full solution from Azure, Firebase or Amazon Web Services.

A native app, hybrid app or full web app can drastically change how your application works (learn the difference here).

Location, location, location.

You want your server and resources to be as close to your customers as possible; this is one rule that cannot be broken. If you need to spend more money to buy a server in a location closer to your customers, do it.

My Setup

I have a Digital Ocean server with 2 cores and 2GB of RAM in Singapore that I use to test and develop apps. That one server has MySQL, NGINX, NodeJS, PHP and many scripts running on it in the background.  I also have a MongoDB cluster (3 servers) running on AWS in Sydney via MongoDB.com.  I looked into CouchDB via Cloudant but needed the GeoJSON features with fair dedicated pricing. I am considering moving my Ubuntu server off Digital Ocean (in Singapore) and onto an AWS server (in Sydney). I am using promise-based NodeJS calls where possible so that calls to the operating system, database or web do not block.  Update: I moved to a Vultr VM (article here)

Here is a benchmark for HTTP and HTTPS requests from rural NSW to a Node server behind NGINX in Singapore (routed via Sydney, Melbourne, Adelaide and Perth) that performs a callback to Sydney, Australia to run a GeoQuery against a large database and returns the result to the customer via Singapore.

SSL

SSL adds processing overhead and latency.

Here is a breakdown of the hops from my desktop in regional NSW making a network call to my Digital Ocean server in Singapore (with private parts redacted or masked).

traceroute to destination-server-redacted.com (###.###.###.##), 64 hops max, 52 byte packets
 1  192-168-1-1 (192.168.1.1)  11.034 ms  6.180 ms  2.169 ms
 2  xx.xx.xx.xxx.isp.com.au (xx.xx.xx.xxx)  32.396 ms  37.118 ms  33.749 ms
 3  xxx-xxx-xxx-xxx (xxx.xxx.xxx.xxx)  40.676 ms  63.648 ms  28.446 ms
 4  syd-gls-har-wgw1-be-100 (203.221.3.7)  38.736 ms  38.549 ms  29.584 ms
 5  203-219-107-198.static.tpgi.com.au (203.219.107.198)  27.980 ms  38.568 ms  43.879 ms
 6  tengige0-3-0-19.chw-edge901.sydney.telstra.net (139.130.209.229)  30.304 ms  35.090 ms  43.836 ms
 7  bundle-ether13.chw-core10.sydney.telstra.net (203.50.11.98)  29.477 ms  28.705 ms  40.764 ms
 8  bundle-ether8.exi-core10.melbourne.telstra.net (203.50.11.125)  41.885 ms  50.211 ms  45.917 ms
 9  bundle-ether5.way-core4.adelaide.telstra.net (203.50.11.92)  66.795 ms  59.570 ms  59.084 ms
10  bundle-ether5.pie-core1.perth.telstra.net (203.50.11.17)  90.671 ms  91.315 ms  89.123 ms
11  203.50.9.2 (203.50.9.2) 80.295 ms  82.578 ms  85.224 ms
12  i-0-0-1-0.skdi-core01.bx.telstraglobal.net (Singapore) (202.84.143.2)  132.445 ms  129.205 ms  147.320 ms
13  i-0-1-0-0.istt04.bi.telstraglobal.net (202.84.243.2)  156.488 ms
    202.84.244.42 (202.84.244.42)  161.982 ms
    i-0-0-0-4.istt04.bi.telstraglobal.net (202.84.243.110)  160.952 ms
14  unknown.telstraglobal.net (202.127.73.138)  155.392 ms  152.938 ms  197.915 ms
15  * * *
16  destination-server-redacted.com (xx.xx.xx.xxx)  177.883 ms  158.938 ms  153.433 ms

160ms to send a request to the server.  This is on a good day, when the Netflix effect is not killing links across Australia.

Here is the route for a call from the Digital Ocean server above in Singapore to the MongoDB cluster on Amazon Web Services in Sydney.

traceroute to redactedname-shard-00-00-nvjmn.mongodb.net (##.##.##.##), 30 hops max, 60 byte packets
 1  ###.###.###.### (###.###.###.###)  0.475 ms ###.###.###.### (###.###.###.###)  0.494 ms ###.###.###.### (###.###.###.###)  0.405 ms
 2  138.197.250.212 (138.197.250.212)  0.367 ms 138.197.250.216 (138.197.250.216)  0.392 ms  0.377 ms
 3  unknown.telstraglobal.net (202.127.73.137)  1.460 ms 138.197.250.201 (138.197.250.201)  0.283 ms unknown.telstraglobal.net (202.127.73.137)  1.456 ms
 4  i-0-2-0-10.istt-core02.bi.telstraglobal.net (202.84.225.222)  1.338 ms i-0-4-0-0.istt-core02.bi.telstraglobal.net (202.84.225.233)  3.817 ms unknown.telstraglobal.net (202.127.73.137)  1.443 ms
 5  i-0-2-0-9.istt-core02.bi.telstraglobal.net (202.84.225.218)  1.270 ms i-0-1-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.157)  50.869 ms i-0-0-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.153)  49.789 ms
 6  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  107.395 ms  108.350 ms  105.924 ms
 7  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  105.911 ms 21459.tauc01.cu.telstraglobal.net (134.159.124.85)  108.258 ms  107.337 ms
 8  21459.tauc01.cu.telstraglobal.net (134.159.124.85)  107.330 ms unknown.telstraglobal.net (134.159.124.86)  101.459 ms  102.337 ms
 9  * unknown.telstraglobal.net (134.159.124.86)  102.324 ms  102.314 ms
10  * * *
11  54.240.192.107 (54.240.192.107)  103.016 ms  103.892 ms  105.157 ms
12  * * 54.240.192.107 (54.240.192.107)  103.843 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

It appears Telstra Global or AWS block traceroute probes closer to the destination, so I will ping instead to see how long the trip takes.

bytes from ec2-##-##-##-##.ap-southeast-2.compute.amazonaws.com (##.##.##.##): icmp_seq=1 ttl=50 time=103 ms

It is obvious that the longest part of the response to the client is not the GeoQuery on the MongoDB cluster or the processing in NodeJS but the travel time for the packet and the overhead of securing it.

My server locations are not optimal. I cannot move the MongoDB cluster to Singapore because MongoDB doesn’t have servers in Singapore, and Digital Ocean doesn’t have servers in Sydney.  I should move my services on Digital Ocean to Sydney, but for now let’s see how far this Digital Ocean server in Singapore and MongoDB cluster in Sydney can go. I wish I had known about Vultr, as they are like Digital Ocean but have a location in Sydney.

Security

Secure (SSL) communication is now mandatory for Apple and Android apps talking over the internet, so we can’t eliminate it to speed up the connection, but we can move the server. I am using more modern SSL ciphers in my SSL configuration, so they may slow down the process too. Here is a speed test of my server’s ciphers. If you use stronger ciphers, expect a small CPU hit.

cipherspeed

FYI: I have a few guides on adding a commercial SSL certificate to a Digital Ocean VM, updating OpenSSL on a Digital Ocean VM, configuring NGINX and SSL, and limiting SSH connection rates to prevent brute-force attacks.

Server Limitations and Benchmarking

If you are running your website on a shared server (e.g. a cPanel domain) you may encounter resource-limit warnings, as web hosts and some providers want to charge you more for moderate to heavy use.

Resource Limit Is Reached 508
The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I have never received a resource limit reached warning with Digital Ocean.

Most hosts (AWS, Digital Ocean, Azure etc) have limitations on your server, and when you exceed a magical limit they restrict your server or start charging excess fees (they are not running a charity).  AWS and Azure have different terminology for CPU credits, and you really need to predict your application’s CPU usage to factor scalability into the monthly costs. Servers and databases generally have limited IOPS (input/output operations per second), and lower-tier plans offer lower IOPS.  MongoDB Atlas lower tiers have fewer than 120 IOPS, middle tiers have 240~2,400 IOPS and higher tiers have 3,000–20,000 IOPS.

Know your bottlenecks

The siege HTTP stress-testing tool is good; the command below will throw 400 concurrent local HTTP connections at your website for one minute.

#!/bin/bash
# -t1m = run for one minute, -c400 = 400 concurrent users
siege -t1m -c400 'http://your.server.com/page'

The results seem a bit low: 47.3 trans/sec.  No failed transactions, though 🙂

** SIEGE 3.0.5
** Preparing 400 concurrent users for battle.
The server is now under siege...
Lifting the server siege.. done.

Transactions: 2803 hits
Availability: 100.00 %
Elapsed time: 59.26 secs
Data transferred: 79.71 MB
Response time: 7.87 secs
Transaction rate: 47.30 trans/sec
Throughput: 1.35 MB/sec
Concurrency: 372.02
Successful transactions: 2803
Failed transactions: 0
Longest transaction: 8.56
Shortest transaction: 2.37

Sites like http://loader.io/ allow you to hit your web server or web page with many hits a second from outside of your server.  Below I threw 50 concurrent users at a Node API endpoint that ran a geo query against my MongoDB cluster.

nodebench50c

The server can easily handle 50 concurrent users a second. Latency is an issue though.

nodebench50b

I can see the two secondary MongoDB servers being queried 🙂

nodebench50a

Node has decided to only use one CPU under this light load.

I tried 100 concurrent users over 30 seconds. CPU activity was about 40% of one core.

nodebench50d

I tried again with a 100-200 concurrent user limit (passed). CPU activity was about 50% using two cores.

nodebench50e

I tried again with a 200-400 concurrent user limit over 1 minute (passed). CPU activity was about 80% using two cores.

nodebench50f

nodebench50g

It is nice to know my promise-based NodeJS code can handle 400 concurrent users requesting a large dataset from GeoJSON without timeouts. The result is about the same as siege (47.6 trans/sec). The issue now is the delay in getting the data back to the user.

I checked the MongoDB cluster and I was only reaching 0.17 IOPS (maximum 100) and 16% CPU usage so the database cluster is not the bottleneck here.

nodebench50h

Out of curiosity, I ran a 400-connection benchmark against the Node server over HTTP instead of HTTPS and the results were near identical (400 concurrent connections with an 8,000ms delay).

I really need to move my servers closer together to avoid the delays in responding. 47.6 geo queries served a second (4,112,640 a day) with a large payload is OK, but it is not good enough for my application yet.

Limiting Access

I may limit access to my API based on geo lookups (http://ipinfo.io is a good site that allows you to programmatically limit access to your app services) and auth tokens, but this will slow down uncached requests.
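A hedged sketch of the kind of per-IP limiting described above, done in memory before any slow geo or auth lookup. The window and limit values are assumptions for illustration:

```javascript
const WINDOW_MS = 5 * 60 * 1000; // 5-minute window (illustrative)
const MAX_REQUESTS = 100;        // per-IP limit within the window (illustrative)
const hits = new Map();          // ip -> array of request timestamps

function allowRequest(ip, now = Date.now()) {
  // keep only the timestamps still inside the window
  const recent = (hits.get(ip) || []).filter(t => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) return false; // over the limit: reject early
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```

A banned-IP list or an ipinfo.io country lookup would slot in before this check; anything that rejects a request without a database round trip keeps the rejection cheap.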

Scale Up

I can always add more cores or memory to my server in minutes, but that requires a shutdown. 400 concurrent users does max out my CPU and push memory above 80%, so adding more cores and memory would be beneficial.

Digital Ocean does allow me to permanently or temporarily raise the resources of the virtual machine. To obtain 2 more cores (4 total) and 4x the memory (8GB) I would need to jump to the $80/month plan and adjust the NGINX and Node configuration to use the extra cores/RAM.

nodebench50i

If my app is profitable I can certainly reinvest.

Scale Out

With MongoDB clusters I can easily shard and gain extra throughput if I need it, but with only 0.17 of a maximum 100 IOPS of my existing cluster being utilised I should focus on moving the servers closer together.

NGINX does have commercial products that handle scalability, but they cost thousands. I could scale out manually by setting up a Node proxy that points to multiple servers receiving the parent calls. This may be more beneficial, as Digital Ocean servers start at $5 a month, but it would add a whole lot of complexity.

Cache Solutions

  • Nginx Caching
  • OpCache if you are using PHP.
  • Node-cache – In memory caching.
  • Redis – In memory caching.
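All of these boil down to the same idea: keep hot results in memory with a time-to-live so repeat requests skip the database. A minimal sketch of what node-cache or Redis provide (without their eviction policies, persistence or clustering; names and TTLs are illustrative):

```javascript
const store = new Map(); // key -> { value, expires }

function cacheSet(key, value, ttlMs) {
  store.set(key, { value, expires: Date.now() + ttlMs });
}

function cacheGet(key) {
  const entry = store.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expires) { // expired: drop it and treat as a miss
    store.delete(key);
    return undefined;
  }
  return entry.value;
}
```

For a geo API, caching popular query results for even 30 seconds can absorb a burst of identical requests; Redis does the same trick across multiple servers.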

Monitoring

Monitoring your server and resources is essential in detecting memory leaks and spikes in activity. htop is a great command-line monitoring tool on Linux.

http://pm2.keymetrics.io/ is a good Node process-monitoring app, but it does go a bit crazy with processes on your box.

CPU Busy

Communication

It is a good idea to inform users of server status and issues with delayed queries; when things go down, inform people early. Update: article here on self-service status pages.

censisfail

The Future

UPDATE: 17th August 2016

I set up an Amazon Web Services EC2 server (read the AWS setup guide here) with only 1 CPU and 1GB of RAM and have easily achieved 700 concurrent connections.  That’s 41,869 geo queries served a minute.

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

AWS MongoDB Test

The MongoDB cluster CPU was at 25% usage with 200 query opcounters on each secondary server.

I think I will optimize the AWS OS ‘swappiness’ and performance settings and aim for 2,000 queries.

This is how many hits I can get with the CPU remaining under 95% (794 geo serves a second). AMAZING.

AWS MongoDB Test

Another recent benchmark:

AWS Benchmark

UPDATE: 3rd Jan 2017

I decided to ditch the cluster of three AWS servers running MongoDB and instead set up a single MongoDB instance on an Amazon t2.medium server (2 CPU/4GB RAM) for about $50 a month. I can always upgrade back to the AWS MongoDB cluster later if I need it.

OK, I just threw 2,000 concurrent users at the new single-server AWS MongoDB instance and it was able to handle the delivery: no dropped connections, but the average response time was 4,027ms. That is not ideal, but this is 2,000 users a second – and that is after the API handles the IP banned-list check, user account validity, the last-5-minute query limit check (from MySQL), payload validation on every field and then the MongoDB geo query.
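The order of those checks matters: the cheap in-memory checks run first so a banned IP or malformed payload never costs a MySQL or MongoDB round trip. A sketch of the pipeline shape (the check functions are stand-ins I have invented to illustrate the flow, not the real implementation):

```javascript
// deps carries the individual checks so the pipeline itself stays testable
async function handleGeoRequest(req, deps) {
  if (deps.isBanned(req.ip)) return { status: 403 };               // banned list (in memory)
  if (!deps.validPayload(req.body)) return { status: 400 };        // every field validated
  if (!(await deps.accountValid(req.user))) return { status: 401 };    // MySQL lookup
  if (await deps.overQueryLimit(req.user)) return { status: 429 };     // last-5-min check (MySQL)
  return { status: 200, data: await deps.geoQuery(req.body) };         // MongoDB geo query
}
```

Rejected requests fall out after the cheapest failing check, so under attack or heavy load most traffic never touches a database.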

scalability_budget_2017_001

The two cores on the server were hitting about 95% usage. The benchmark here uses the same dataset as above, but the API now performs a whole series of payload-validation, user-limiting and logging steps.

Benchmarking with 1,000 sustained users a second, the average response time is a much lower 1,022ms. Honestly, if I had 1,000–2,000 user queries a second I would upgrade the server or add a series of lower-spec AWS t2.micro servers and create my own cluster.

Security

Cheap may not be good (hosting or DIY). Do check your website often in https://www.shodan.io and see if it has open software or is known to hackers.
If this guide has helped please consider donating a few dollars.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.7 added self-service status page info and Vultr info

Short: https://fearby.com/go2/scalability/

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Scalability, Scalable, Security, ssl, VM Tagged With: api, digital ocean, mongodb, scalability, server

How to get started in programming

June 22, 2016 by Simon Fearby

Today I was asked: should I learn to code in Swift, and what can I build? Swift 1.0 (http://www.swift.org) was launched in 2014 by Apple and is a multi-platform programming language for iOS, macOS and Linux (but not Windows). Swift has gone through rapid changes recently and Swift 4.0 is the latest stable version. Swift updates can break coding standards set in Swift 1.x, 2.x and 3.x, so don’t get comfortable.

Should a beginner learn Swift? Yes, if you only want to develop apps for iOS and macOS and ignore the Windows and Android platforms. Apple has released a learn-to-code-in-Swift app that will make learning Swift much easier: http://www.apple.com/swift/playgrounds/

Previously Apple recommended developers use the Objective-C language to code and compile apps for iOS and macOS. Objective-C has been in production since 1983 and is very complex (loads of squiggles, square brackets and legacy classes).

Most smartphones and tablets run Android, not because they are better but because they are cheaper (that’s my opinion). A top-of-the-line iPhone 6S+ costs $1,500 whereas a reasonable Android phone will set you back about $130 to $400. I personally think Apple devices are faster, better and more secure, but if you are developing you need to publish apps on Android also.  Apple devices are supported for a lot longer than Android devices (Android support lifetime v Apple iOS).  Even the 5-year-old iPad 2 was getting the iOS 10 software update in September 2016, although in 2017 iOS 11 does not run on an iPad 2.

If you want to develop native Android apps you will need to learn Java in the Android Studio IDE (https://developer.android.com/studio/index.html). Be prepared to be confused, as Android Studio has a steep learning curve.

OK, so where do beginners start?

What Companies look for when hiring programmers

https://youtu.be/QbSD4EtpVdY

Jumping right into Swift, Java, Objective-C or Lua may not be a good idea if a plain old website will do. What you want to develop determines where you should start. All developers should be able to knock up a website and database before jumping into making mobile apps. PHP (http://www.php.net) and MySQL (http://www.mysql.com) are good options for beginners making websites.

http://www.udemy.com and http://www.w3schools.com/ are great places to learn more about coding. If you want to see what the pros are doing, check out http://www.sitepoint.com for learning what you need to know, fast.

But I really want to develop a mobile app.

Development platforms like the Corona SDK (http://www.coronalabs.com) are a great option for beginners: it is easy to pick up, super fast, and supports eye-popping OpenGL animations as well as business apps. Corona lets you code in a programming language called Lua (https://www.coronalabs.com/learn-lua) and compiles your app for iOS, Android, macOS or Windows desktops. How cool is that?

Corona APp

Another possible solution is the Electron technology.

Corona wraps a common interface (API: https://docs.coronalabs.com/API/index.html) over each platform’s API, so your code calls the Corona API and, when you compile your app, each platform’s native API methods are called.

Corona has great guides and support pages:

https://docs.coronalabs.com/guide/programming/index.html – Getting Started

https://docs.coronalabs.com/guide/index.html – Guides index.

https://coronalabs.com/blog – Keep up to date with Corona and read guides on many topics.

https://coronalabs.com/resources – Corona Resources.

https://www.youtube.com/user/CoronaGeek – Weekly Corona video podcast.

https://forum.coronalabs.com – Talk to hundreds of Corona developers and ask questions.

https://docs.coronalabs.com/api/index.html – Corona API

What tools do you need

  • A Mac Computer with a retina display.
  • Sublime text Editor https://www.sublimetext.com/3 (and Sublime to Corona Plugin https://coronalabs.com/products/editor/ )
  • Version control software http://www.zenaware.com/cornerstone
  • A good code snippet saving app is http://snippets.me/
  • Patience and drive.

Knowing what you want to develop will narrow down the technologies you need to learn.

Summary:

  • If you want to make websites, learn HTML and PHP.
  • If you want to build business apps inside corporations, learn Visual Studio .NET.
  • If you want to make mobile apps fast, learn Corona.
  • If you want to make advanced iOS apps, learn Swift.
  • If you want to make advanced Android apps, learn Android Studio.
  • If you want to make online databases, learn MySQL.

Check out my guides here on:

How to build your first cross-platform mobile app with corona

Creating a development server for $5 a month

What is the difference between a website, app, web app, hybrid app and software?

..and many more free guides here.

Happy coding.

Still reading?  Check out the beginner guides on Sitepoint.

Donate and make this blog better





V1.2. Added short link

Sort: https:/fearby.com/go2/learn/

Filed Under: Apple, Cloud, CoronaLabs, Development, MySQL, Scalable, Security Tagged With: Android, build, code, corona, iOS, test

Computer hardware, clock cycles and code ramblings

April 18, 2016 by Simon Fearby

Modern computers have insane amounts of processing power compared to computers from 5 years ago. Computer memory and storage are cheap, but that is no excuse to design and develop bloated webpages and apps. Consumers and customers are very impatient, and there are loads of statistics on users abandoning an app or website because it takes more than three seconds to respond or load.

You can control the speed of software running on your home computer by upgrading it, but you cannot guarantee the performance of apps that run on shared hosting platforms or web hosts.  You can buy a cPanel-based web host or a dedicated server from $5 a month; how can they make money? They do this by virtually hosting your service (web server etc) alongside other customers and running multiple services on a single processor core. Shared servers are very economical, but you are sharing the resources with other users.

If you want maximum performance you can always buy a dedicated server from a cloud server provider, but each provider may secretly share the resources of that server (more information here: http://blog.cloudharmony.com/2014/07/comparing-cloud-compute-services.html) and performance may be impacted. Dedicated servers can be very expensive and can run into thousands of dollars per month.

So what can I control?

Writing (or installing) good code is essential: try to optimize everything and know your server’s limitations and bottlenecks. To understand bottlenecks, you need to know about computer hardware. A few lines of code can trigger millions or billions of actions inside a processor.

A computer has the following major components:

  • Hard drive (HDD/SSD): This is where your operating system, software and files are stored when the computer is turned off. Hard drives store magnetic charges (0’s and 1’s) on spinning metal platters: a zero is a negative charge and a one is a positive charge. Hard drives spin at 5,400~15,000 RPM, and data is read and written by a head that needs to be positioned over the data bit. Hard drives are very slow but reliable, and each data bit can be read/written tens of thousands of times. Faster solid-state drives don’t use spinning metal platters and work a bit like memory (see below), although solid-state drives have limited writes per sector. Read more: https://en.wikipedia.org/wiki/Hard_disk_drive
  • Memory (RAM): Computer memory is basically a large array of very fast storage that the processor reads data from and writes data to (0’s and 1’s). Memory is like a massive spreadsheet grid, and accessing data from memory is 1000x faster than accessing data from a hard drive.  Memory stores data as charges in silicon microchips, and each storage bit can be changed millions of times. When a computer is turned off the memory is wiped. Read more: https://en.wikipedia.org/wiki/Computer_memory
  • Processor (CPU): This is the chip that does the primary calculations and controls just about everything. A processor can perform various predetermined functions, read and write to memory/hard drives, or send data over a USB cable or network connection. Processors are quite dumb and have to keep queues (pipelines) of things to do in their internal cache (memory) between cycles.  A clock cycle is a single step where the processor (and all of its cores) does one thing and gets ready for the next clock cycle; all clock cycles in a software routine are linked, and if one instruction fails all following linked instructions have to be cleared and dealt with, or errors and blue screens can happen. A processor’s speed is the total number of clock cycles it can perform in a second, and a modern computer can process 3,500,000,000 cycles (3.5GHz) a second. A processor can calculate one complex instruction or multiple simple instructions in one cycle, and most processors have multiple cores that can each perform calculations in a clock cycle. But don’t be fooled: many clock cycles are spent waiting for data to be read/written from memory/hard drive or loaded from the processor’s cache. A processor’s instruction pipeline has four main stages for each action in a cycle’s execution pipeline (“fetch”, “decode”, “execute” and “write back”). For example, when the processor is asked to add variable1+variable2, “fetch” gets the instruction, “decode” reads the values from memory, “execute” performs the calculation and “write back” writes the result back to memory. (See a complete list of Intel instructions here and here.) Read more: https://en.wikipedia.org/wiki/Central_processing_unit

Processors spend most of their time waiting for data to be fetched.  There is no such thing as 100% efficient code.

If software needs to read a file from a spinning hard drive there is a mandatory latency period (https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics) while the hard drive’s read head moves in or out, reads the data from the right sectors and returns it.  A 3.5GHz computer has to wait approximately 19,460,000 clock cycles for a sector on a hard drive to be under the read head, and the data still has to be moved from the hard drive through the processor and into memory.  Luckily processors have fantastic branch-prediction abilities (https://en.wikipedia.org/wiki/Branch_predictor), so even though the software has asked for a file to be read the processor can work on 19 million other cycles before checking to see if the data has returned from the hard drive.
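That ~19 million figure falls out of simple arithmetic: roughly half a platter rotation of average latency at 5,400 RPM, multiplied by the clock rate. This is a back-of-the-envelope sketch (real drives add seek time on top):

```javascript
const clockHz = 3.5e9;               // 3.5GHz processor
const rotationMs = 60000 / 5400;     // one full rotation at 5,400 RPM ≈ 11.1 ms
const avgLatencyMs = rotationMs / 2; // on average the sector is half a turn away
const wastedCycles = clockHz * (avgLatencyMs / 1000);
console.log(Math.round(wastedCycles)); // roughly 19.4 million cycles spent waiting
```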

Caching content

One solution is to have software or servers cache certain files in memory to speed up the delivery of files. The latest DDR4 computer memory runs at blistering speeds of 2,400MHz (2,400,000,000 cycles a second), so it should keep up with a 2.4GHz computer? Memory is cheap and fast, but computer memory has a huge limitation: you can’t just ask memory to return the value of a memory cell and expect it in a few cycles. The processor has to essentially guide the memory module to activate the required electrical columns and rows to allow that value to be read and returned to the processor. This is like giving instructions to a driver over a phone: it takes time for the driver to listen, turn a corner, drive down a street and then turn another corner just to get to the destination.  The processor has to manage millions of memory reads and writes a second; memory can’t direct itself to the memory value, the processor has to do that.

Memory timings (RAM timings) are explained better here (http://www.hardwaresecrets.com/understanding-ram-timings/).  It takes a modern DDR4 memory module about 15 clock cycles just to enable the column circuit for a memory cell to be activated, then another 15 clock cycles to activate the row, and a whole load of other cycles to read the data. Reading a 1 MB file from memory may take 100,000,000 clock cycles (and that is not factoring in the processor working on other tasks). A computer process is the name given to software code that has been handed over to the processor: software code is loaded into the processor/memory as instructions, and depending on the code and user interactions different parts of the software’s instructions are loaded into the processor. In any given second a computer program may enter and leave a processor over 1,000 times, and a processor’s internal memory is quite small.

Benchmarking

Choosing a good host for your website, mobile app or APIs is very important; sometimes the biggest provider is not the fastest. You should benchmark how long actions take on your site and what the theoretical maximum limit is. Do you need more memory or cores? Hosts will always sell you more resources for money.

http://www.webpagetest.org/ is a great site to benchmark how long your website takes to deliver each part of itself to customers around the world.  You can minify (shrink) your code and images to reduce the processing time per page load.

If you are keen, research PHP caching plugins like OpCache (http://php.net/manual/en/book.opcache.php), Memcached (https://www.digitalocean.com/community/tutorials/how-to-install-and-use-memcache-on-ubuntu-14-04) for PHP or MySQL, or the WordPress W3 Total Cache plugin (https://wordpress.org/plugins/w3-total-cache/).

Place your website and application databases close to your customers.  In Australia, it takes a minimum of 1/5 of a second for a server outside of Australia to respond, so a website that loads 30 resources adds those delays between your server and customers (30 × 1/5 of a second adds up).
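That cost is easy to compute, assuming each resource is a separate, sequential round trip with no HTTP/2 multiplexing or connection reuse (a worst-case sketch):

```javascript
const roundTripMs = 200; // ~1/5 of a second to an overseas server
const resources = 30;    // files a typical page might load
const totalSeconds = (resources * roundTripMs) / 1000;
console.log(totalSeconds); // 6 seconds of latency alone, before any transfer time
```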

Consider merging and minifying website resources (http://www.minifyweb.com/) to lower the number and size of the files you deliver to users. Most importantly, monitor your website 24/7 to see if it is slowing down. I use http://monitis.com to monitor server performance remotely.

Summary

I hope I have not confused you too much. Try some videos below to learn more.

Good Videos: 

How a CPU Works:

How Processors are Made:

How a Hard Drive works in Slow Motion – The Slow Mo Guys

What’s Inside a CPU?

Zoom Into a Microchip (Narrated)

How computers work in less than 20 minutes

Read some of my other development-related guides here https://fearby.com/
Donate and make this blog better





Filed Under: Cloud, Development, Domain, Hosting, MySQL, Security, VM, Wordpress Tagged With: code, hard drive, memory, optimize, processor, solid state

  • « Go to Previous Page
  • Go to page 1
  • Go to page 2
  • Go to page 3
  • Go to Next Page »

Primary Sidebar

Poll

What would you like to see more posts about?
Results

Support this Blog

Create your own server today (support me by using these links

Create your own server on UpCloud here ($25 free credit).

Create your own server on Vultr here.

Create your own server on Digital Ocean here ($10 free credit).

Remember you can install the Runcloud server management dashboard here if you need DevOps help.

Advertisement:

Tags

2FA (9) Advice (17) Analytics (9) App (9) Apple (10) AWS (9) Backup (21) Business (8) CDN (8) Cloud (49) Cloudflare (8) Code (8) Development (26) Digital Ocean (13) DNS (11) Domain (27) Firewall (12) Git (7) Hosting (18) HTTPS (6) IoT (9) LetsEncrypt (7) Linux (20) Marketing (11) MySQL (24) NGINX (11) NodeJS (11) OS (10) PHP (13) Scalability (12) Scalable (14) Security (44) SEO (7) Server (26) Software (7) SSH (7) ssl (17) Tech Advice (9) Ubuntu (39) Uncategorized (23) UpCloud (12) VM (44) Vultr (24) Website (14) Wordpress (25)

Disclaimer

Terms And Conditions Of Use All content provided on this "www.fearby.com" blog is for informational purposes only. Views are his own and not his employers. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. Never make changes to a live site without backing it up first.

Copyright © 2022 · News Pro on Genesis Framework · WordPress · Log in
