Category archive: Datacentre

Web companies create their own problems on Black Friday – research

The artificially created surge in demand for online bargains on Black Friday could be damaging many brands, according to a recent report by hosting giant Rackspace.

Though Black Friday is tied to a national holiday in the US, its lack of cultural significance in the UK hasn’t prevented it from becoming an online retail landmark. Companies ranging from Amazon to John Lewis to Orange have created one-day, limited-offer shopping deals to generate demand surges, with customers flooding both their online and in-store outlets on Black Friday.

On Friday, traffic analyst Traffic Defender live-blogged that the John Lewis online store was unavailable and gave a rundown of the performance of a variety of online players.

With e-commerce loyalty at an all-time low, according to Rackspace, these artificially engineered traffic spikes, and the public scrutiny of performance that follows them, may be counterproductive for any cloud-based service provider.

In a Rackspace survey of 2,000 consumers, 39% of respondents recognised that website failures are due to poor construction and maintenance. The vast majority (83%) of UK consumers claim a consistently slow or unavailable website negatively affects their brand loyalty. Almost one in five consumers (18%) would wait only 10 seconds or less for a website or page to load before abandoning their search and looking elsewhere.

“This Black Friday, ecommerce loyalty is at an all-time low. Consumers now have a vast choice of retailers available to them online, so it’s easy for them to change their mind about which shop to spend their hard-earned cash with,” said Paul Bolt, VP of Technology Practices at Rackspace.

The ‘inevitable website traffic spikes’ and ‘uncompromising demand from customers’ could nevertheless tarnish a brand, warned Bolt: “There really is no place for outages.” If companies are going to risk their brand image for a single day of busy trading, they must put the cloud foundations in place, Bolt argued.

Clouds across Europe powered by wood, water and nuclear fission

Two differing approaches to powering the cloud with renewable energy have been unveiled this week.

In northern Russia, a new datacentre facility in Udomlya is to use nuclear fission to generate the 80 MW needed to power its 10,000 racks supporting the cloud. Meanwhile, Luxembourg-based colocation provider LuxConnect is to power its new Tier IV data centre in Bettembourg with a wood burner.

The two data centres illustrate the differing approaches to powering the cloud. According to LuxConnect business development manager Claude Demuth, it is becoming increasingly important for service providers that use datacentre facilities to host their cloud services to demonstrate that their electricity comes from a sustainable source.

Until recently, LuxConnect met this commitment by purchasing credits for power generated from water-driven turbines in Norway. While the power used in its datacentre is not the very same power fed into the grid in Norway, the credits can be exchanged for a local source of power, and LuxConnect was still credited as a user of sustainable power. However, the Luxembourg government suggested that the new facility should use local renewable energy from biomass.

In response, LuxConnect has built its own plant to burn waste wood from pallets, timbers and old furniture. The released energy is converted into electricity that will run the new data centre’s power and cooling. The biomass-burning plant has been built across the road from the data centre and connects to it via underground pipes.

Meanwhile in Russia, according to news agency Telecom Daily, nuclear power operator Rosenergoatom, which runs ten nuclear power plants with 33 reactors, is to supply the Udomlya facility. According to reports, it has offered Facebook and Google space on the upcoming campus to help the American companies comply with new data residency laws.

Samsung unveils 128GB DDR4 memory modules for datacentres

Samsung Electronics says it is mass producing memory modules for datacentre and enterprise servers that could turbocharge cloud services.

It has published details, in a blog post, of double data rate 4 (DDR4) memory in 128-gigabyte (GB) modules. When installed in enterprise servers and data centres, these could significantly speed the rate of processing in cloud computing applications, slashing response times, boosting productivity and raising the quality of service.

The new modules use TSV (‘through silicon via’), an advanced chip packaging technology that vertically connects DRAM chip dies using electrodes that penetrate the micron-thick dies through microscopic holes. Samsung first used this when it introduced its 3D TSV DDR4 DRAM (64GB) in 2014. TSV is used again in this new registered dual inline memory module (RDIMM) which, claims Samsung, opens the door for ultra-high-capacity memory at the enterprise level.

The 128GB TSV DDR4 RDIMM comprises a total of 144 DDR4 chips, arranged into 36 4GB DRAM packages, each containing four 20-nanometre (nm) 8-gigabit (Gb) chips assembled with TSV packaging technology.
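
Those figures hang together; here is a quick back-of-the-envelope check in Python, assuming the standard ECC RDIMM split in which one of every nine packages carries error-correction bits (a detail not spelled out in the announcement):

```python
# Back-of-the-envelope check of the stated module figures.
chips_per_package = 4            # four 8Gb dies stacked with TSV
packages = 36

total_chips = packages * chips_per_package
print(total_chips)               # 144 chips, as stated

chip_capacity_gb = 8 / 8         # an 8-gigabit die holds 1 gigabyte
package_capacity_gb = chips_per_package * chip_capacity_gb
print(package_capacity_gb)       # 4 GB per package, as stated

# Assumption (not in the announcement): as on a standard ECC RDIMM,
# one in nine packages holds ECC bits, so 36 packages = 32 data + 4 ECC.
data_packages = 32
print(data_packages * package_capacity_gb)   # 128 GB usable capacity
```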

Unlike conventional chip packages, which interconnect die stacks with wire bonding, the TSV packages are interconnected by electrodes passing vertically through hundreds of fine holes. This creates a massive improvement in signal transmission speeds. In addition, Samsung’s 128GB TSV DDR4 module has a special data buffer function that improves module performance and lowers power consumption.

As a result, servers can reach 2,400 megabits per second (Mbps), roughly twice their normal speed at half the power usage. Samsung says it is now accelerating production of TSV technology to ramp up 20nm 8Gb DRAM chips and improve manufacturing productivity.

“We will continue to expand our technical cooperation with global leaders in servers, consumer electronics and emerging markets,” said Joo Sun Choi, executive vice president of Memory Sales and Marketing at Samsung Electronics.

AWS launches EC2 Dedicated Hosts feature to identify specific servers used

Amazon Web Services (AWS) has launched a new service for the nervous server hugger: it gives users knowledge of the exact server that will be running their machines and also includes management features to prevent licensing costs escalating.

The new EC2 Dedicated Hosts service was created by AWS in reaction to the sense of unease that users experience when they never really know where their virtual machines (VMs) are running.

Announcing the new service on the company blog, AWS chief evangelist Jeff Barr said the four main areas of improvement are licensing savings, compliance, usage tracking and better control over instances (AKA virtual machines).

The Dedicated Hosts (DH) service will allow users to port their existing server-based licenses for Windows Server, SQL Server, SUSE Linux Enterprise Server and other products to the cloud. A feature of DH is the ability to see the number of sockets and physical cores available to a customer before they invest in software licenses, improving their chances of not overpaying. Similarly, the Track Usage feature will help users monitor and manage their hardware and software inventory more thriftily. By using AWS Config to track the history of instances started and stopped on each of their Dedicated Hosts, customers can verify usage against their licensing metrics, Barr says.

Another management improvement comes from the Control Instance Placement feature, which promises ‘fine-grained control’ over the placement of EC2 instances on each Dedicated Host.
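
As a rough illustration of how allocation and placement fit together, here is a minimal sketch using boto3, the AWS SDK for Python; the availability zone, instance type and AMI ID are placeholders rather than values from the announcement:

```python
# Minimal Dedicated Hosts sketch with boto3; zone, instance type and
# AMI ID below are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2")

# Allocate a Dedicated Host: a physical server reserved for this account.
resp = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="m4.large",   # hosts are allocated per instance type
    Quantity=1,
)
host_id = resp["HostIds"][0]

# Control Instance Placement: pin an instance to that specific host,
# which is what makes per-socket/per-core licensing auditable.
ec2.run_instances(
    ImageId="ami-12345678",    # placeholder AMI
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
```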

The provision of a physical server may be the most welcome addition for the many cloud buyers dogged by doubts over compliance and regulatory requirements. “You can allocate Dedicated Hosts and use them to run applications on hardware that is fully dedicated to your use,” says Barr.

The service will help enterprises that have complicated portfolios of software licenses where prices are calculated on the number of CPU cores or sockets. However, Dedicated Hosts can only run in tandem with AWS’ Virtual Private Cloud (VPC) service and cannot yet work with its Auto Scaling tool.

Equinix connects AWS direct to data centres in Dallas and London

Data centre operator Equinix has added an Amazon Web Services (AWS) Direct Connect facility in its Dallas data centre and in the data centres of its London International Business Exchange (IBX).

The AWS Direct Connect facility means that companies using Equinix data centres can connect their privately owned and managed infrastructure directly to AWS, it claims. The arrangement creates a private connection to the AWS Cloud within the same infrastructure. This ‘hard-wiring’ of two infrastructures in the same building can cut costs and latency while boosting throughput speeds, ultimately creating better application performance, Equinix says. These two offerings bring the total number of Equinix data centres offering Direct Connect to AWS to ten.

The service is a response to increasing demand from clients for hybrid clouds. Equinix says it can configure these in its own data centres through direct interconnection of the public cloud provider’s kit and the equipment belonging to clients. This Equinix-enabled hybrid is an instant way to achieve the scalability and cost benefits of the cloud while maintaining the security and control standards offered by on-premises infrastructure.

Equinix claims that a recent study, Enterprise of the Future, found that hybrid deployments in enterprise cloud computing will double by 2017. According to the study group’s feedback, 84% of IT leaders will by then deploy IT infrastructure with interconnection, defined as direct, secure physical or virtual connections, at its core, compared to 38% today.

London is the second Equinix location in Europe, after Frankfurt, to get an AWS Direct Connect arrangement. It means that customers can get “native” connections to AWS Cloud offerings, whereas previously they tethered from Equinix in London into AWS’s Dublin facilities. Equinix’s Dallas IBX, DA5, is the fourth data centre in North America to offer AWS Direct Connect, joining Equinix’s facilities in Seattle, Silicon Valley and Washington. Equinix now offers AWS Direct Connect in ten global locations: Dallas, Frankfurt, London, Osaka, Seattle, Silicon Valley, Singapore, Sydney, Tokyo and Washington, D.C./Northern Virginia. Equinix customers in these areas experience lower network costs into and out of AWS and take advantage of reduced AWS Direct Connect data transfer rates.

AWS launches new wind farm in green drive

Amazon Web Services has contracted green energy specialist EDP Renewables to build and run the 100 megawatt (MW) Amazon Wind Farm US Central in Ohio.

The project is due to complete by May 2017, when it will begin producing enough power to run 29,000 average US homes for a year, it claims. AWS says the latest addition to its green energy stable will generate 320,000 MWh of electricity a year.

Amazon also claims the energy generated will feed the electrical grid supplying both current and future AWS cloud data centres.

In November 2014 AWS committed to running its infrastructure entirely on renewable energy – in the long term – and claimed that 25% of the electricity running its cloud services was green. By the end of 2016 it aims to have pushed the proportion to 40%.

Earlier this year it announced that a renewable project, the Amazon Wind Farm (Fowler Ridge) in Indiana, could generate 500,000 MWh of wind power annually. In April it began a pilot project using Tesla’s energy storage batteries to power data centres at times when wind and solar power are not available. In the same month AWS joined the American Council on Renewable Energy and the US Partnership for Renewable Energy Finance to work with government policy makers on developing more renewable energy options.

In June 2015 it said its new AWS Solar Farm in Virginia could generate 170,000 MWh of solar power annually, and a month later it added another wind farm in North Carolina that could generate 670,000 MWh a year. In total, AWS claims the potential to generate 1.6 million MWh of energy a year.
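
The quoted figures are consistent, as a quick check shows; the per-home and capacity-factor numbers below are our own derived estimates rather than AWS claims:

```python
# Quick arithmetic check of the figures quoted across these announcements.
fowler_ridge_mwh = 500_000    # Indiana wind farm, per year
virginia_solar_mwh = 170_000
north_carolina_mwh = 670_000
ohio_mwh = 320_000            # the new Amazon Wind Farm US Central

total_mwh = fowler_ridge_mwh + virginia_solar_mwh + north_carolina_mwh + ohio_mwh
print(total_mwh)              # 1,660,000 MWh: the "1.6 million MWh" claim

# Implied annual usage per home from the Ohio figures: ~11 MWh,
# in line with the usual estimate for an average US household.
print(ohio_mwh / 29_000)      # ≈ 11.0

# Implied capacity factor of the 100 MW farm (8,760 hours in a year):
print(ohio_mwh / (100 * 8_760))   # ≈ 0.37, typical for onshore wind
```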

“We continue to pursue projects that help to power AWS data centres and bring us closer to achieving total renewable energy,” said Jerry Hunter, VP of Infrastructure at AWS.

Google launches virtual machine customisation facility

Google has announced a new, more fitting way of buying virtual machines (VMs) in the cloud. It claims the extra attention to detail will stamp out forced over-purchasing and save customers money.

With the newly launched beta of Custom Machine Types for Google’s Compute Engine, Google promised that it will bring an end to the days when “major cloud providers force you to overbuy”. Google has promised that under its new system users can buy the exact amount of processing power and memory they need for their VM.

The new system, explained in a Google blog, aims to improve the experience of buying a new virtual machine in the cloud. Google says it wants to replace the old system, in which users have to choose from a menu of pre-configured CPU and RAM options on machines that never quite fit. Since pre-set VM sizes usually double at each step, Google explained, customers frequently have to buy eight CPUs even when they only need six.

The Custom Machine Types system will let users buy virtual CPU (vCPU) and RAM in finer-grained units (priced per gibibyte, GiB, rather than gigabyte) and give customers more options to adjust the number of cores and memory as needed. If a customer’s bottom line expands, the cloud can be ‘let out’ accordingly. In another tailoring option, Google has introduced smaller units of charging (with per-minute billing) in a bid to meter the customer’s consumption of resources more accurately.

In the US, every vCPU hour will cost $0.03492 and every GiB of RAM will cost $0.00468 per hour. The price for Europe and Asia is slightly higher at $0.03841 per vCPU hour. Rates decrease with bulk purchasing, however.
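
To see what that means in practice, here is a small cost sketch at the quoted rates; the 12 GiB of RAM and the reuse of the US RAM rate for Europe/Asia are our own illustrative assumptions, and bulk-purchase discounts are not modelled:

```python
# Cost sketch for a custom 6-vCPU machine at the quoted rates.
US_VCPU_HOUR = 0.03492        # $ per vCPU-hour (US)
US_RAM_GIB_HOUR = 0.00468     # $ per GiB of RAM per hour (US)
EU_ASIA_VCPU_HOUR = 0.03841   # $ per vCPU-hour (Europe/Asia)

vcpus, ram_gib = 6, 12        # 12 GiB is an illustrative assumption

us_hourly = vcpus * US_VCPU_HOUR + ram_gib * US_RAM_GIB_HOUR
print(f"US: ${us_hourly:.5f}/hour (~${us_hourly * 730:.2f}/month)")

# The article gives no EU/Asia RAM rate; assume the US one here.
eu_hourly = vcpus * EU_ASIA_VCPU_HOUR + ram_gib * US_RAM_GIB_HOUR
print(f"EU/Asia: ${eu_hourly:.5f}/hour (assuming the US RAM rate)")
```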

Support is available in Google’s command line tools and through its application programming interface (API), and Google says it will create a special graphical interface for its virtual machine shop in its Developer Console. Developers can specify their choice of operating system for their tailored VM, with the current options being CentOS, CoreOS, Debian, OpenSUSE and Ubuntu.

Meanwhile, elsewhere in the Google organisation, it is working with content deliverer Akamai Technologies to reduce hosting and egress costs and improve performance for Akamai customers taking advantage of Google Cloud Platform.

Equinix cleared to buy Telecity but must sell London, Amsterdam and Frankfurt facilities

The European Commission has approved the proposed acquisition of data centre operator Telecity by rival Equinix. However, to assuage anti-competition concerns, Equinix had to agree to sell off a number of data centres in Amsterdam, London and Frankfurt.

BCN reported in May that Equinix and TelecityGroup agreed to the $2.35bn takeover in which US-based Equinix would buy all issued Telecity shares. The acquisition gives Equinix a stronger presence in the UK and extends its footprint into new locations with identified cloud and interconnection needs, including Dublin, Helsinki, Istanbul, Milan, Stockholm and Warsaw. Equinix provides colocation services in 33 metropolitan areas worldwide. Telecity operates data centres in 12 metropolitan areas in the European Economic Area (EEA) and Turkey.

However, the activities of Equinix and Telecity overlap in the four EEA metro areas of Amsterdam, Frankfurt, London and Paris.

In a statement, EC Commissioner in charge of competition policy Margrethe Vestager said the growing economic importance of cloud services makes it crucial to maintain competition between data centres. However, the deal does not necessarily stifle competition, Vestager said. “The Commission is satisfied that the commitments offered by Equinix will ensure that companies continue to have a choice for hosting their data at competitive prices.”

The Commission had concerns that the concentration of data centres controlled by one vendor could lead to higher prices for colocation services in the Amsterdam, London and Frankfurt metropolitan areas. It concluded that the remaining competitors in these areas were unlikely to be able to match the competitive pressure currently exercised by Telecity, and that new players would face significant difficulties entering the market due to the high investment and long deployment times needed.

To address the Commission’s concerns, Equinix submitted commitments, offering to divest a number of data centres in Amsterdam, London and Frankfurt.

Red Hat launches Cloud Access on Microsoft Azure

Red Hat has followed its recently declared partnership with Microsoft by announcing the availability of Red Hat Cloud Access on Microsoft Azure.

The Access service will make it easier for subscribers to move any eligible, unused Red Hat subscriptions from their data centre to the Azure cloud. Red Hat Cloud Access will combine the support relationship subscribers enjoy with Red Hat with the cloud computing power of Azure, the software vendor said on its official blog. Cloud Access extends to Red Hat Enterprise Linux, Red Hat JBoss Middleware, Red Hat Gluster Storage and OpenShift Enterprise. The blog hints that more collaborations with Microsoft are to come.

Meanwhile, in his company blog, Azure CTO Mark Russinovich gave a public preview of the coming Azure Virtual Machine Scale Sets offering. VM Scale Sets are an Azure Compute resource that allows users to create and manage a collection of virtual machines as a set. These scale sets are designed for building large-scale services targeting big compute, big data and containerized workloads, all of which are increasing in significance as cloud computing evolves, said Russinovich.

By integrating with Azure Insights Autoscale, they provide the capacity to expand and contract to fit requirements with no need to pre-provision virtual machines. This allows users to match their consumption of computing resources to their application needs more accurately.

VM Scale Sets can be controlled within Azure Resource Manager templates and they will support Windows and Linux platform images, as well as custom images and extensions. “When you define a VM Scale Set, you only define the resources you need, so besides making it easier to define your Azure infrastructure, this also allows Azure to optimize calls to the underlying fabric, providing greater efficiency,” said Russinovich. “To deploy a scale set, all you need is an Azure subscription.”

Example Virtual Machine Scale Set templates are available on the GitHub repository.

Microsoft to invest $2bn in Euro cloud infrastructure

Microsoft has announced plans to invest $2bn in building infrastructure across Europe to support nationally based data centres running national cloud services. It means that commercial cloud services can be run from the UK and other major European countries, allaying some data sovereignty and Safe Harbour fears.

From late 2016 Microsoft Azure and Office 365 will be generally available from local UK-based data centres, CEO Satya Nadella said at an event in London. A locally supported service for Microsoft Dynamics CRM Online will follow in 2017. Microsoft will also offer Azure ExpressRoute to provide customers with the option of a private connection to the cloud.

The new local Microsoft cloud regions are designed to allay fears over data residency for customers in the UK. Once the infrastructure is in place, Microsoft will be able to replicate data within the UK for backup and recovery.

Services delivered from these UK data centres will not only improve sovereignty and cut latency but create new opportunities for Microsoft UK’s 25,000 channel partners, said Michel Van der Bel, the general manager of Microsoft UK. The UK is a global leader in the use of cloud-based systems, with an adoption rate of 84%, claimed Van der Bel.

“Our commitment to run our cloud services from local data centres will help meet demand from those who want their cloud systems based in the UK,” said Van der Bel. Customers and partners who can innovate will grow with the power of the cloud, he said, and now they can meet the strict regulations of the banking, financial services and public sectors.

Microsoft also announced completion of the latest phase of data centre facility expansion in Ireland and the Netherlands, which serve as cloud computing hubs for European customers.

One channel partner said the ‘local data centres for local people’ plan is an overdue step in the right direction.

“This is great news for the UK and the technology sector as a whole,” said Avanade UK’s general manager Julian Tomison. “Questions around data residency aren’t new, but at least now we have a new solution.”

It is good for the channel and the cloud industry, said Tomison. “The investment validates the sector and will have a positive impact on the cloud industry as collectively customers will feel they have more control. Having data centres in the UK helps us stay competitive when prices and services are becoming uniform. I predict more investments like this as legislation like last month’s Safe Harbour ruling shapes the legal situation here.”