Amazon Web Services (AWS) finally launches a Sydney data centre

With the recent launch of the AWS Asia Pacific Sydney Region, we are now able to run a completely local cloud hosting environment.
AWS already counts 10,000 customers based in Australia and New Zealand, a substantial proportion of which are expected to move their workloads to the Sydney facility to gain better local performance or reduce regulatory risk.
The region is physically located at the Equinix SY3 Sydney IBX Data Center, and joins Amazon's existing Asia Pacific deployments such as the Singapore facility. It will cater for customers tempted by the cloud computing giant's scale and easy self-serve model, but restricted in their choice of provider by data sovereignty requirements.
The Inexorable Move To The Cloud
What we are witnessing is the final realisation of an idea that has been a long time in the making - the maturation of computing into a utility service, just like your gas, phone and electricity. Something talked about, anticipated, occasionally attempted, and now finally reaching fruition.
Amazon operates a global online business built on massive volumes and razor-thin margins. Driving that business profitably required meticulous operational discipline and a high degree of automation in its IT infrastructure and processes, at a scale that very few businesses could even comprehend.
At some point, Amazon recognised that this knowledge and these skills had incredible value in their own right.
By extending these systems and providing them to third parties, Amazon has started us along a long-dreamed-of path, and opened up an opportunity that may at some point eclipse the value of the original business they were built to support.
So What Is AWS Anyway?
Amazon's cloud infrastructure consists of a small number of facilities using best-of-breed technologies and processes. Combined with massive economies of scale, it allows millions of end consumers to pool resources and amortise computing costs. There is no secret sauce, as it were. Amazon uses the same physical infrastructure, operating systems, virtualisation products and so forth as are available to you, which is why Amazon cannot ensure a single provisioned server instance stays up any more than you can. The true value of the entire AWS stack is the pragmatism shown in assembling it. Amazon has provided the tools needed to achieve service uptime in the simplest way, using the conventional technology currently available.
They provide this power conveniently pre-wrapped, tested, ready to use, at scale, supported, and accessible from a self-serve digital kiosk, with all the conveniences and benefits encapsulated. You also benefit from the lessons learned in building systems that can sustain a company far bigger than your own - don't underestimate the value of this.
Using AWS, you declaratively design your data centre: the individual internal networks, the routing rules between them, the VPN that allows external access, the individual servers in each network, and even the operating system and applications that run on them. Of course, when you do this, nobody is physically moving or manually configuring anything, but you get the same result - that's the power of virtualisation, and why it's a giant leap in efficiency.
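To make that concrete, here is a minimal sketch of what such a design can look like when driven programmatically through Amazon's Python SDK (boto3). Everything specific in it - the CIDR ranges, AMI ID, instance type and the use of an internet gateway rather than a VPN - is an illustrative assumption, not a recommendation, and in practice the same design can be expressed as a declarative template that AWS provisions for you.

```python
# A minimal sketch (boto3, Python): a tiny "data centre" defined in code.
# All CIDR ranges, the AMI ID and the instance type below are illustrative
# placeholders - substitute your own values.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # the new Sydney region

# An isolated private network for this environment
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# A subnet within that network to hold our servers
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# A gateway so the network can reach the outside world
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Finally, a virtual server inside that subnet
ec2.run_instances(
    ImageId="ami-00000000",   # placeholder machine image
    InstanceType="t2.micro",  # placeholder size
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet_id,
)
```

A few API calls (or one template) and the "racking and cabling" is done - and tearing it all down again is just as quick.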
The Cost Efficiencies Of Virtualisation
Another thing to bear in mind: none of the costs involved are capital expenditure. You rent the resources you need as and when you need them. You are no longer forced to anticipate computing load up front and pay the price for getting it wrong (unused infrastructure on one side, unhappy customers on the other).
A properly designed application can be scaled across virtual instances as load increases and scaled back down when it drops (overnight, for example). In extreme cases, environments can be set up and torn down on an hourly basis. So you can start quickly, end quickly, and in between pause (and, by extension, pause your costs) or scale down, or up, as your heart desires. Your own needs may be uneven, but the aggregate computing needs of all clients, across all timezones, are far more consistent.
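As a sketch of what that overnight scale-down can look like in practice, the snippet below uses the Auto Scaling API (again via the boto3 Python SDK) to grow a fleet each morning and shrink it again each evening. The group name, sizes and cron schedules are illustrative assumptions.

```python
# A minimal sketch (boto3, Python): scale a fleet up for business hours and
# down again overnight. The group name, sizes and schedules are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-southeast-2")

# Grow the fleet before the workday starts (cron times are in UTC)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-fleet",        # hypothetical group name
    ScheduledActionName="scale-up-mornings",
    Recurrence="0 21 * * *",                 # roughly 8am Sydney time
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=8,
)

# Shrink it again overnight, when nobody is using it
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-fleet",
    ScheduledActionName="scale-down-nights",
    Recurrence="0 9 * * *",                  # roughly 8pm Sydney time
    MinSize=1,
    MaxSize=20,
    DesiredCapacity=2,
)
```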
That's the other advantage of pooling resources - someone else is using your excess capacity when you are not, so a smaller number of physical servers can satisfy everyone's aggregate demand, rather than everyone having their own under-utilised servers, sized to handle a brief spike in peak demand.
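A back-of-the-envelope illustration of that effect, with entirely made-up numbers: three customers whose peaks fall at different times of day need far fewer pooled servers than they would buying for their own individual peaks.

```python
# A toy illustration with made-up numbers: hourly server demand for three
# customers whose peaks fall at different times of day.
customer_a = [2, 2, 2, 8, 8, 2, 2, 2]   # peaks mid-morning
customer_b = [2, 2, 2, 2, 2, 8, 8, 2]   # peaks in the afternoon
customer_c = [8, 8, 2, 2, 2, 2, 2, 2]   # peaks overnight (another timezone)

# Each customer provisioning alone must buy for their own peak...
servers_owned_separately = sum(max(c) for c in (customer_a, customer_b, customer_c))

# ...but a shared pool only has to cover the combined demand at its worst hour.
servers_in_shared_pool = max(a + b + c for a, b, c in zip(customer_a, customer_b, customer_c))

print(servers_owned_separately)  # 24 servers, mostly sitting idle
print(servers_in_shared_pool)    # 12 servers, kept busy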
Virtualisation is not new, but Amazon's offering provided the first large-scale implementation with 'critical mass' in breadth and depth. They are starting to have competition, sure, and there are a number of projects providing the tools to assemble a 'private cloud'. But unless you have specific requirements, a private cloud is too much hassle (a recent Forrester survey suggests that of the companies that attempted to build private cloud infrastructure with a provisioning portal like Amazon's, only 10% succeeded). Private clouds also run on private infrastructure - which is exactly what we'd ideally like to avoid from a cost perspective.
Cloud Computing Is Not For Me
For those who feel cloud computing is not relevant to their industry: AWS has clients in industries as diverse as financial services and health, where data security is paramount, and you would be surprised how much interest even they show in cloud computing.
Incidentally, some pundits have noted that Amazon aren't exactly going out of their way to correct the murmurings that whole industry sectors cannot or will not use public cloud infrastructure in its current guise. Amazon are well aware of who is using their cloud infrastructure, and may be keeping quiet to extend their first-mover advantage.
Obviously, the absence until recently of a local data centre presence in Australia has stymied adoption wherever data residency is legally mandated (no offshore storage or transit), but with the opening of Amazon's Sydney facility we have no doubt a tidal wave is about to crash over the way we've traditionally done corporate computing.
Change Is Coming Whether You Like It Or Not
Disruptive change is not uniform. It doesn't distribute itself in a statistically normal 'bell curve' fashion; it behaves more like an earthquake - when things happen, they happen abruptly, interspersed with less drastic but ever-improving optimisation and refinement. We are in the very early stages of utility computing, and you could say it's the early adopters dipping their toes in. But whatever real or perceived issues are holding adopters back, the efficiencies on offer are so great that a small trickle will at some point become an avalanche of adoption, as companies refuse to keep waiting once they know they are starting the race at a significant competitive disadvantage to those who have made the leap. At that point, the days of siloed corporate computing will come to an end for the vast bulk of companies. I believe we will look back in hindsight and marvel at how quickly it happened.
It may be that, given the critical nature of these systems, many countries will put in place a general regulatory standards and enforcement regime to ensure acceptable levels of data security and resilience (similar to the regimes applied to financial institutions today). These things have yet to play out, or are playing out now. Initial experience, however, is good. Amazon's AWS cloud offering is the same base upon which Amazon's own significant businesses run, so they have much to lose themselves. The AWS S3 object storage service now holds in excess of a trillion objects, and not one has been lost since the service's inception - giving rise to Amazon's claim of eleven nines (99.999999999%) of durability.
Corporate computing is finally going the way of other utilities. It has reached a point in its maturity where it's best handled by specialist organisations which facilitate efficiencies of scale. Just like electricity - what you actually do with that power is where your competitive advantage lies.
We're at a pretty exciting nexus.