First and foremost, Christmas is not here yet. We have two more months before we have to sit down with our most annoying relatives. Phew.
But, if you’re in retail, you’ll know how important the next two months are. The next two months pay for the rest of the year. If they go well, you’ll be in great stead for 2018. If they go badly, you might get your P45 in your Christmas card.
If you have physical stores, you might be advertising temporary vacancies to help cope with the rush. Good thinking, Batman.
Your online store might not have the capacity it needs.
There are four factors that influence the capacity of your online store:
If you use Google Analytics, (1) and (2) are really easy to find so you should definitely do this now.
Log in and select your store and go to Audience > Overview.
Under Overview, open the metric drop-down. You’ll probably see Sessions selected. Choose Pageviews.
On the right hand side, choose Hourly.
Then, hover over the graph and you’ll see the number of pageviews per hour.
Select your busiest period. This might be your Black Friday or Christmas sale from last year. Find the busiest hour or the busiest day.
To make the maths easy, let’s say there were 36,000 pageviews in that hour. There are 3,600 seconds in an hour, so divide that number by 3,600 to get your pageviews per second. In our example, we arrive at 10.
We then need to adjust this number for your projections this year. If you’re spending more on advertising this year, you might want to add 20%. If you’re spending less, you might want to subtract 20%. It’s better to overshoot than to undershoot, though, and it’s worth adding a buffer in case your estimate is a bit out.
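The arithmetic above can be sketched in a few lines. This is purely illustrative: the function name is made up, and the 20% figures are just the examples from the text, not recommendations.

```python
def required_pageviews_per_second(peak_hourly_pageviews, growth=0.0, buffer=0.2):
    """Convert a peak hour's pageviews into a per-second capacity target.

    growth: expected fractional change in traffic (e.g. 0.2 for +20% ad spend).
    buffer: extra headroom in case the estimate is a bit out.
    """
    per_second = peak_hourly_pageviews / 3600  # 3,600 seconds in an hour
    return per_second * (1 + growth) * (1 + buffer)

# 36,000 pageviews in the busiest hour = 10/second, plus 20% for extra
# ad spend this year and a 20% safety buffer:
print(round(required_pageviews_per_second(36_000, growth=0.2, buffer=0.2), 1))  # → 14.4
```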
The number you arrive at is the number of pageviews per second that your store and hosting need to be able to accommodate, so your Christmas card contains an awful joke and not a P45 (I’m not selling this well, am I?).
Armed with your pageviews per second, your developer and hosting company (or Coherent… just sayin’) will be able to run tests to determine if you’re already in good stead or if you need to make changes so your online store doesn’t fall flat when you most need it up and running.
The photocopier is broken again. Ugh. At least their phone number’s on my recent calls list from yesterday…
.. Ring ring .. For sales, press 1 ..
.. This is really adding insult to injury ..
.. For accounts, press 2 ..
.. I need to remember the menu options for next time ..
.. For support, press 3 ..
.. At last! ..
.. We record all our calls ..
.. That’s great. So do we. ..
.. All of our agents are busy .. doo doo .. doo be doo .. Your time is valuable to us ..
Sound familiar? Of course it does. But that doesn’t mean there isn’t a better way.
Self-service portals and apps are huge win-win opportunities for you and your clients. Your customers can get the information they need when they need it (even if they’re photocopying their face at 4am after a classic work night out). And you don’t have to pay people to help one customer at a time, when that customer probably knows what they need and doesn’t want to have to phone in anyway.
That person might even click a button that the customer could have clicked themselves. Efficient, huh?
This sort of workflow belongs in the ’90s. You just have to let go!
That photocopier problem could have been solved with an online troubleshooter. It could also have been solved with an in-app chat that allows one staff member to help several customers at once (some companies take this to the extreme, which is why their agents seem to have just woken up). In-app chat would also let the customer take photos of the problem. How much quicker would that be?
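An online troubleshooter is, at heart, just a decision tree. Here is a minimal sketch; the questions and advice are invented for illustration, and a real troubleshooter would be driven by your own support knowledge base.

```python
# A hypothetical troubleshooter as a tiny yes/no decision tree.
# Each node is either a question with "yes"/"no" branches, or a final answer.
TREE = {
    "question": "Is the photocopier displaying an error code?",
    "yes": {
        "question": "Does the code start with 'E'?",
        "yes": "Open the side panel and clear the paper path.",
        "no": "Power-cycle the machine and try again.",
    },
    "no": "Check the toner level on the front display.",
}

def troubleshoot(node, answers):
    """Walk the tree with a list of 'yes'/'no' answers; return the advice."""
    for answer in answers:
        if isinstance(node, str):  # already reached a final answer
            break
        node = node[answer]
    return node

print(troubleshoot(TREE, ["yes", "no"]))  # → Power-cycle the machine and try again.
```

The same structure maps directly onto an in-app chat flow, where each node becomes a message with two buttons.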
Whatever systems you have in place now might be great for what they do but if customers can’t help themselves then there is a tremendous opportunity to improve your customer experience and lower costs at the same time. It’s what we do as software developers. We create software that could become part of your website, or could be a mobile app, to plug into whatever systems you have now and deliver extra functionality.
Even if your current systems are decades old black boxes, even if you’ve never worked with software developers before, we know our stuff and we can make it easy for you to deliver the service your customers expect. Call us!
Bureaucracy isn’t just the domain of large corporates. Even as a young, hip software development company, we have it. In this post, I want to describe how we use automation and bespoke software in our own workflow.
Systems we use
We use several off-the-shelf systems:
Harvest integrates nicely with Xero (which is why we use it) but apart from that, there’s little integration between our systems.
Sometimes it doesn’t matter much. Integrating our phone system with our accounting system wouldn’t help much. We had a few challenges, though.
The General Data Protection Regulation (GDPR) is all over the news. Whilst the deadline of the 25th of May 2018 for UK businesses to be compliant might seem like a long way away, many businesses will have to make substantial changes before then to avoid hefty penalties. It is hard to overstate the differences between the already complex and important Data Protection Act (DPA) and the GDPR, or the impact the new regulation will have on businesses and consumers.
Who does it apply to?
The GDPR will have an impact on every business and consumer with any EU presence. It “upgrades” the protections enshrined in law by the DPA for today’s digital world, thereby offering more protection to consumers and placing more responsibilities on businesses.
What about Brexit?
The Government has, with unusual (!) clarity, stated that the changes in the GDPR will continue to apply after Brexit.
What is it?
Bespoke software is a very broad descriptor that encompasses a large range of products. Essentially, it is any custom-built piece of programming, with sets of instructions that enable specific tasks to be carried out automatically.
For example, consider a used-car dealership. To keep up with today’s market, at the very least the dealership will need to have a well designed website listing all available vehicles and details. Now imagine a customer visiting this website. What will make them more likely to get in touch? Some applications we might include in our software are:
It is a fairly common and easy job for us to develop websites with this functionality. But there are many more advanced operations we can incorporate.
Market research and advertising
Billing and accounting
Server management and security
Essentially, we develop automated systems and integrate all these processes so you don’t have to waste time re-entering data, or performing long-winded menial tasks like manually inputting every transaction into your accounts. You have all the information you need at the click of a button.
We believe that almost all businesses can save substantial labour time by automating processes like those mentioned. This saves you from spending money on additional administrative staff, and frees you up to focus on your areas of expertise and passion.
As your online presence grows, you may start to weigh up buying hardware, renting hardware and using cloud services for your hosting. Each option has its pros and cons, and this article gives our view on the realities of the decision. We offer cloud servers and dedicated servers – find out more about hosting with us.
If you buy hardware, you can get exactly what you want. Choose the chassis, choose the motherboard and then start adding more components. If you need a storage server, buy a large chassis and add lots of disks. If you need a compute-intensive server, buy a small chassis and put a 4 socket motherboard in it. The choices are endless.
This is an excellent choice for those whose infrastructure spans a large number of servers. Why? Because buying 100 servers has economies of scale over buying 1 (and buying 1000 has economies of scale over buying 100). Intel openly publish the “tray price” of their CPUs (i.e. the cost to buy them in bulk) and you probably won’t find those prices at any retailers. It’s the same for other components too.
It’s not just the immediate economies of scale, though. There are also economies of scale in co-locating your servers. A rack is cheaper than 46 co-location packages and also gives you physical access (so you don’t have to pay for remote hands). You can co-locate tools and spares, for which many co-location providers charge extra. You can connect your servers over a private network and reduce your bandwidth cost, especially if you co-locate a backup server and do backups over the private network.
Renting hardware can offer almost the same level of control as buying hardware. Sure, you don’t get physical access, but many large dedicated server providers now have very advanced control panels where you can add extras, take remote control of your server, reprovision your server and so on so there isn’t always a need for physical access.
Not owning the hardware might seem like a bad financial decision (if you draw a parallel to renting a home) but in reality it often isn’t. Consider the following hypothetical yet realistic example, where the server provider can take advantage of their economies of scale. Of course, these economies of scale don’t apply if you need an unusual server configuration.
Owning: £5000 upfront (needs upgrading after 3 years), £50/month to co-locate, 5 remote hands requests at £75 each and 3 disk failures at £100 each. Total: £7475.
Renting: £200 per month all inclusive. Total: £7200.
It’s also a lot more reassuring than worrying about who will be around to do a drive replacement in a faraway city in the middle of the night.
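The worked example above can be checked in a couple of lines. All figures are the hypothetical ones from the text; adjust them for your own situation.

```python
MONTHS = 36  # three years, after which the owned server needs upgrading

owning = (
    5000            # hardware upfront
    + 50 * MONTHS   # co-location at £50/month
    + 5 * 75        # 5 remote hands requests at £75 each
    + 3 * 100       # 3 disk failures at £100 each
)
renting = 200 * MONTHS  # all-inclusive £200/month

print(owning, renting)  # → 7475 7200
```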
There are several myths about cloud, its performance, its reliability and its scalability. Put simply, cloud is great for small deployments but as you start to approach the need for a dedicated server, you won’t be able to do the same with the cloud at a similar cost. There is a very simple reason for this:
To provide a large cloud server with a certain specification, a quality provider will usually allocate slightly more resources on the physical server than your virtual server’s specification (so you’re paying for more than you can use), because the software that creates the virtual server uses resources too. That software is often the cause of problems in its own right. Virtualisation software can be highly complex, needing experienced systems administrators on hand 24×7 to deal with mishaps, which further adds to the cost. There are software solutions that take away some of the complexity, but most of these are very complex themselves, require a lot of investment and leave room for price variation due to vendor tie-in. For this reason, I would advise anyone in the market for an important cloud service not to rely on a company whose whole virtual infrastructure depends on their business relationship with a particular vendor. We have known one vendor increase the price of their software tenfold, with affected clients told via a mailing list.
Again, cloud is great for small setups but with the added complexity, you won’t get the performance/cost ratio of a dedicated server.
Claims are often made about the cloud’s scalability and reliability. Scalability may be useful where you need to grow significantly very quickly, then back again, often. This is a rare occurrence and most often, if you compare the cost of a dedicated server to match your largest demand with a cloud server that offers the scalability you need, you’ll find the former is more cost effective. Discussions with one hosting company revealed that one client’s obsession with scalability pays for their whole cloud infrastructure.
Reliability is a hot topic too. Many cloud technologies claim to have automatic failover, resulting in essentially zero downtime. CloudHarmony has impartial, accurate uptime statistics for all of the major clouds, and all of the major clouds have more downtime than many dedicated servers. This, again, is due to the complexity: more software means more complexity, which means more points of failure. Interestingly, many of the cloud providers that people associate with reliability (because of their marketing and higher prices) have uptime similar to, or worse than, the cheaper providers.
Compare this with a dedicated server. We tend to recommend Kimsufi as a cheap place to store backups. Our 10-euro-per-month Kimsufi has been up for over a year without any downtime at all. If you need more reassurance than that, look at the same company’s enterprise servers, which are still reasonably priced and come with financially-backed SLAs, RAID and redundant power and network supplies.
Your accounting system is at the heart of your business. Whether you use Sage, Xero, Quickbooks or something else, it holds a wealth of data about your suppliers, customers, financials and KPIs. One of the key advantages of Xero specifically is that it’s cloud-based, which makes it easier to link with other systems and to get data out in your preferred format.
However, as your accounting package, Xero or otherwise, can’t encapsulate everything you do, it’s common to have other systems with overlapping data. Maybe you use Office365 or Google Apps (Gsuite) for the address book but when a customer moves to new premises, you have to update both systems.
Advantage 1: Keeping Xero in sync
Bespoke software development opens the door to synchronising data in Xero with any other system. This includes Xero’s official integrations – but also proprietary systems, legacy systems and systems that don’t officially connect with Xero. Bespoke software can help you to keep using your other systems – or to use a system that doesn’t officially integrate with Xero – and keep them in sync.
Advantage 2: Reporting
The wealth of data in Xero, Sage and other accounting systems can be hard to extract in the format you want. Let’s say that your KPIs include:
Your accounting package holds the keys to this data – but you might have to download a spreadsheet from it, copy and paste in other data from other sources and do some manual analysis. Whilst there is something to be said for manual analysis, there is also a lot to be said for keeping up to date. Bespoke software can provide you with a single pane of glass for your important metrics, presented how you want them to be presented.
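As a flavour of what that single pane of glass does behind the scenes, here is a minimal sketch that collapses raw invoice records (as you might export them from Xero or Sage) into a couple of KPIs. The field names and figures are invented for illustration.

```python
# Hypothetical invoice export, reduced to the fields we need.
invoices = [
    {"customer": "Acme",   "total": 1200.0, "paid": True},
    {"customer": "Acme",   "total": 800.0,  "paid": False},
    {"customer": "Bloggs", "total": 450.0,  "paid": True},
]

revenue_collected = sum(i["total"] for i in invoices if i["paid"])
outstanding = sum(i["total"] for i in invoices if not i["paid"])
revenue_per_customer = revenue_collected / len({i["customer"] for i in invoices})

print(revenue_collected, outstanding, revenue_per_customer)  # → 1650.0 800.0 825.0
```

A real dashboard would pull this data automatically via the accounting package’s API rather than from a hand-typed list, and present it however suits you.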
Advantage 3: If-this-then-that
Aspects of what you do will undoubtedly be complex. However, it’s likely that some of your processes could be encapsulated by a simple order of events. For example, “when the Xero invoice has been paid, send the equipment to the customer”. Keeping abreast of when each invoice has been paid and ensuring that the correct information is conveyed to shipping could be time consuming. However, this sort of repetitive work can often be simplified or automated entirely by software.
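The "when the invoice is paid, ship the order" rule can be sketched as a tiny event handler. The event names and functions below are hypothetical; in practice this would be triggered by a webhook or a periodic poll of the accounting system’s API.

```python
shipped = []  # stand-in for "tell the warehouse"

def ship(invoice):
    # In real life: notify the warehouse, print a label, email the customer.
    shipped.append(invoice["number"])

def on_event(event, invoice):
    """Dispatch incoming accounting events to the right action."""
    if event == "invoice.paid":
        ship(invoice)
    # other events (invoice.updated, invoice.voided, ...) are ignored here

on_event("invoice.updated", {"number": "INV-001"})  # nothing happens
on_event("invoice.paid", {"number": "INV-001"})     # equipment gets sent
print(shipped)  # → ['INV-001']
```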
Cloud is increasingly popular due to its low upfront cost and perceived reliability. I wrote an article a few weeks ago about why it can be a sensible choice, but needs to be considered carefully.
One of the main drawbacks of cloud (VPS / VDS / VM / cloud server / IaaS / whatever you want to call it – it’s generally the same thing) is CPU power. The physical servers that host virtual servers are very easy to add disks and memory to, and since solid state disks became part of cloud offerings, disk IO is no longer a primary concern either. However, the ratio of CPU resources to memory and disk resources can be rather limiting. Consider the following hypothetical but realistic scenario, supported by anecdotal evidence and server suppliers’ hardware specifications:
2x Xeon E5-2620V3 (12 cores, 24 threads)
12 x 512GB SSD in RAID10 = 3TB usable space
Let’s package that into VMs by dividing it:
0.375 of one CPU thread!
As you can see, the disk and memory seem plausible but the CPU seems dismal. Of course, this will vary from provider to provider – and you won’t see “0.375 of 1 CPU thread” but more likely something like “shared CPU”.
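The division works out like this. The VM count of 64 is purely an illustrative assumption, chosen because it is consistent with the 0.375 figure (24 / 64 = 0.375):

```python
# Hypothetical split of the example server into 64 equal VMs.
cpu_threads = 24       # 2x Xeon E5-2620V3: 12 cores / 24 threads
usable_disk_gb = 3000  # 12x 512GB SSD in RAID10 = 3TB usable

vms = 64  # assumed; this is the count that yields the 0.375 figure

print(cpu_threads / vms)     # → 0.375 of a CPU thread per VM
print(usable_disk_gb / vms)  # → 46.875 GB of disk per VM
```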
It can get worse, however, as many providers oversubscribe the resources on their physical servers – i.e., they sell more resources than they have, knowing that, generally, the physical server will cope.
What does this mean for the customer? It means you need to think carefully about CPU, because it’s the most expensive aspect of cloud, the most necessary for many common cloud applications, the hardest to scale and, often, the hardest to get solid information on. It reinforces the argument in my earlier post that cloud is great for small deployments, but needs to be weighed very carefully against dedicated options for larger deployments.
The technically interesting bit
A grand challenge in cloud is not just gathering generalised, historical performance data but knowing how your application will perform when you deploy it. There is no definitive solution to this challenge yet; however, there are ways of getting some idea of how an application is performing now. Utilities like top, which systems administrators are familiar with, provide some insight and have changed in recent years to accommodate the prevalence of shared resources.
Systems administrators will probably know the first five columns of the bottom row very well – they are the most commonly used and, generally, are very useful indicators of how a system is performing. They rely on what is called tick-based CPU accounting, where every CPU tick (the atomic unit of CPU time) is categorised based on how the kernel decides it should be used, and a quick calculation is made to arrive at percentages. The kernel uses a scheduler to manage CPU resources efficiently when there are potentially far more applications running than there are CPU threads available, by switching them in and out.
You may be interested to know that the unit of time between CPU scheduler decisions is, in technical parlance, called a jiffy, and that while most modern Linux systems use the kernel’s Completely Fair Scheduler, a well-known alternative is the Brain Fuck Scheduler, written by an anaesthetist turned kernel hacker who liked to argue on forums.
Anyway, the very last column, “st”, stands for “steal” time. This is supposed to account for CPU time that is “stolen” by the virtualisation software on the physical server and is therefore completely unavailable to the kernel on the virtual server. The virtual server isn’t really aware of why it isn’t available but in a virtual system, it’s safe to assume that it’s probably the result of noisy neighbours and high contention.
Steal time is not easy to measure, as virtual servers are dependent on the virtualisation software on the hypervisor for the resources that would otherwise be physical – that includes the obvious ones like storage devices, network devices and so on, but also the simple act of keeping time, which is an integral part of the system. Thus, the number you see under the “st” column depends not only on contention but also on the virtualisation technology. Literature suggests that IBM’s virtualisation technology allows CPU time to be accounted for extremely accurately, followed by Xen, followed by the others, which are less effective. In our Proxmox setup, no steal time was reported even under very high load and oversubscription.
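To illustrate how tools like top derive those columns, here is a simplified sketch of tick-based accounting: take two snapshots of the per-category CPU tick counters (on Linux, the first line of /proc/stat) and turn the deltas into percentages. The tick values below are made up; on a real system you would read them from /proc/stat a second or so apart.

```python
# Counter order matches /proc/stat's cpu line:
# user, nice, system, idle, iowait, irq, softirq, steal

def cpu_percentages(before, after):
    """Turn two tick-counter snapshots into per-category percentages."""
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return [round(100 * d / total, 1) for d in deltas]

before = [1000, 0, 200, 8000, 50, 10, 10, 30]
after  = [1400, 0, 300, 8450, 60, 15, 15, 160]

user, nice, system, idle, iowait, irq, softirq, steal = cpu_percentages(before, after)
print(f"user: {user}%  idle: {idle}%  steal: {steal}%")  # → user: 36.4%  idle: 40.9%  steal: 11.8%
```

A high steal percentage over sustained periods is the classic symptom of noisy neighbours on an oversubscribed host.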
We often work with clients on speeding up their websites. Great results are achievable, but focus needs to be put on the right areas, as there are myriad reasons why a website might be slow. Recently, we worked with a client to speed up a customised WordPress website by implementing heavy caching with Nginx and extensive tuning for HTTP2 and Google PageSpeed. The result is that real visitors at a significant geographical distance see the page in less than 0.6s, and Google PageSpeed Insights and GTmetrix are both very happy. However, the extensive work of squeezing out every possible millisecond for as many visitors as possible highlighted several important points.
Myth #1: Small images should be combined into sprites
A few years ago this was absolutely true. Today, almost every major browser supports HTTP2. This means that, between a server configured to use HTTP2 and a browser with HTTP2 support, several files can be downloaded simultaneously over one connection. Furthermore, this is often faster than downloading one large file.
Myth #2: Load x first, then y, then z
Myth #3: You need to spend more on hosting
I am disappointed that website speed is so closely associated with server size and hosting costs. We have successfully moved high-traffic websites to smaller servers and made them faster at the same time. Part of it is the configuration of the server, which should aim for efficiency savings, and part is the software itself. If your software (WordPress, Magento or anything else) is badly coded (think about plugins in particular – the ones you downloaded without paying much attention to ;)) then the additional hosting costs to achieve a better speed can be very high. It is better to deal with the problems in the code so that the server doesn’t have to do as much work.
Myth #4: You need a CDN
I am generally in favour of using a CDN. However, it is not the secret sauce it is sometimes made out to be. A good rule of thumb: if your website is fast for local visitors but slow for international visitors (and you have enough international visitors for this to be a problem), then you should consider a CDN. Be aware that CDNs introduce more complexity, as they need to cache your content, so your website needs to tell the CDN when content has changed and needs to reference the CDN on every page. A CDN is relatively unlikely to improve the speed of a low-traffic website, or one where the geographic distance between the server and the users is already fairly small.
Myth #6: Google Page Insights will tell me how fast my website is
Tools like Google PageSpeed Insights, Pingdom Tools and GTmetrix need to be interpreted properly. I despise the fact that they make it so easy for website owners to see what appear to be problems. Google PageSpeed Insights takes little, if any, account of HTTP2. There is debate as to whether Google rankings correlate with this tool’s scores, but as far as real visitors are concerned, I wouldn’t think about it much. GTmetrix is a very advanced tool that takes a lot of different factors into account – it is a very good tool. However, again, despite its bells and whistles, it needs to be interpreted properly. Your score on GTmetrix is not always representative of how users see your website.
On a side note, the Qualys SSL Labs test also needs to be interpreted properly. This test gives you some (actually very good) information about the security of your HTTPS configuration. However, what it neglects to mention is that not having a top score is not necessarily a problem. Why? Because in some cases, using only the latest ciphers will cause problems for users in locations where, for example, Windows XP is still widely used (there are some!). Equally, a very strong HTTPS setup can hurt the performance of your website, as more CPU power is needed to handle the cryptography.
Myth #7: Always enable Gzip compression
Following on from my aside about HTTPS encryption using CPU, gzip compression uses CPU power too. If your website is busy and your server’s bottleneck is the CPU, enabling gzip will take CPU resources away from the server’s normal activities and have a good chance of making it slower. If you simply want to tick the “gzip?” box on online tools, which I discourage until there is substantial evidence that Google rankings depend on this, enable it at level 1, the lowest level. Level 1 is substantially better than nothing for page size, and not that much different to any other level relatively speaking. It does, however, use much, much less CPU time to compress and extract.
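You can see the trade-off for yourself with Python’s standard gzip module. The "HTML" below is invented and deliberately repetitive; exact sizes and timings will vary by machine and zlib version, but the shape of the result should not.

```python
import gzip

# Some repetitive page content, standing in for real HTML.
page = b"<div class='product'><span>Widget</span><em>&pound;9.99</em></div>\n" * 2000

fast = gzip.compress(page, compresslevel=1)  # cheapest CPU-wise
best = gzip.compress(page, compresslevel=9)  # most thorough, most CPU

print(len(page), len(fast), len(best))
# Level 1 already shrinks the page dramatically; level 9 typically saves
# only a little more space while costing far more CPU time per request.
```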