Microsoft debuts container-like architecture for cloud

Microsoft is trying to push more cloud-friendly architectures

Microsoft has announced Azure Service Fabric, a framework for ISVs and startups building highly scalable cloud applications that combines microservices with orchestration, automation and monitoring tools. The move comes as the software company looks to deepen its use of – and ties to – open source technology.

Azure Service Fabric, which is based in part on technology included in Azure App Fabric, breaks apps apart into a wide range of small, independently versioned microservices, so that apps created on the platform don’t need to be re-coded in order to scale past a certain point. The result, the company said, is the ability to develop highly scalable applications while enabling low-level automation and orchestration of their constituent services.
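
To make the microservices idea concrete, here is a minimal sketch of one independently versioned service of the kind such a platform composes – plain Node HTTP in TypeScript, not Service Fabric’s actual API; the service name, version and port are hypothetical:

```typescript
// A minimal sketch of one independently versioned microservice.
import * as http from "http";

const SERVICE_NAME = "pricing";   // hypothetical service name
const SERVICE_VERSION = "1.2.0";  // versioned independently of other services

const server = http.createServer((req, res) => {
  if (req.url === "/health") {
    // A health endpoint an orchestrator can poll to drive self-healing.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ service: SERVICE_NAME, version: SERVICE_VERSION, status: "ok" }));
    return;
  }
  res.writeHead(404);
  res.end();
});

// Each service listens on its own port; scaling out means starting more
// copies behind a load balancer rather than re-coding the application.
server.listen(Number(process.env.PORT ?? 3000));
```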

“Service Fabric was born from our years of experience delivering mission-critical cloud services and has been in production for more than five years. It provides the foundational technology upon which we run our Azure core infrastructure and also powers services like Skype for Business, Intune, Event Hubs, DocumentDB, Azure SQL Database (across more than 1.4 million customer databases) and Bing Cortana – which can scale to process more than 500 million evaluations per second,” explained Mark Russinovich, chief technology officer of Microsoft Azure.

“This experience has enabled us to design a platform that intrinsically understands the available infrastructure resources and needs of applications, enabling automatically updating, self-healing behaviour that is essential to delivering highly available and durable services at hyper-scale.”
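
As a rough illustration of the self-healing behaviour described above – a hedged sketch, not Microsoft’s implementation; the health URL, poll interval and “service.js” entry point are assumptions – an orchestrator can poll each replica’s health endpoint and restart any that stops responding:

```typescript
// Hedged sketch of orchestrator-driven self-healing.
import * as http from "http";
import { spawn, ChildProcess } from "child_process";

const HEALTH_URL = "http://localhost:3000/health"; // hypothetical endpoint
let replica: ChildProcess | null = null;

function startReplica(): void {
  // "service.js" stands in for the compiled microservice being supervised.
  replica = spawn("node", ["service.js"], { stdio: "inherit" });
}

function restart(): void {
  replica?.kill();
  startReplica();
}

function checkHealth(): void {
  const req = http.get(HEALTH_URL, (res) => {
    if (res.statusCode !== 200) restart(); // unhealthy response: recycle it
    res.resume(); // drain the body so the socket is freed
  });
  req.on("error", restart); // connection refused means the replica is down
  req.setTimeout(2000, () => req.destroy(new Error("health check timed out")));
}

startReplica();
setInterval(checkHealth, 5000); // poll every five seconds
```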

A preview of the service will be released to developers at the company’s Build conference next week.

The move is part of a broader architectural shift in the software stack powering cloud services today. The traditional OS/hypervisor model is increasingly limited in its ability to keep services scalable and resilient for high-I/O applications, which has manifested in, among other things, a shift towards breaking applications down into a series of connected microservices – a trend many associate with Docker and OpenStack, among other open source software projects.

Speaking of open source, the move comes just days after Microsoft announced that MS Open Tech, its standalone open source subsidiary, will rejoin the company – a move Microsoft hopes will drive further engagement with open source communities.

“The goal of the organization was to accelerate Microsoft’s open collaboration with the industry by delivering critical interoperable technologies in partnership with open source and open standards communities. Today, MS Open Tech has reached its key goals, and open source technologies and engineering practices are rapidly becoming mainstream across Microsoft. It’s now time for MS Open Tech to rejoin Microsoft Corp, and help the company take its next steps in deepening its engagement with open source and open standards,” explained Jean Paoli, president of Microsoft Open Technologies.

“As MS Open Tech rejoins Microsoft, team members will play a broader role in the open advocacy mission with teams across the company, including the creation of the Microsoft Open Technology Programs Office. The Programs Office will scale the learnings and practices in working with open source and open standards that have been developed in MS Open Tech across the whole company.”

Flexiant’s Cloud Freedom for DevOps @Flexiant | @DevOpsSummit [#DevOps]

DevOps teams tasked with driving success in the cloud need a solution to efficiently leverage multiple clouds while avoiding cloud lock-in. Flexiant today announced the commercial availability of Flexiant Concerto. With Flexiant Concerto, DevOps teams have the cloud freedom to automate the build, deployment and operation of applications consistently across multiple clouds. Concerto is available through four disruptive pricing models aimed at delivering multi-cloud at a price point everyone can afford.

10 Best Windows Blogs to Bookmark

Last week, we shared our top picks for the best Mac blogs out there. This week, we’re continuing the trend, only this time, we’re listing our favorite blogs and websites focused on Windows: The Office Blogs and The Official Microsoft Blog. While neither of these is likely surprising, they are top picks for a reason—Microsoft […]

AWS bolsters GPU-accelerated instances

AWS is updating its GPU-accelerated cloud instances

Amazon has updated its family of GPU-accelerated instances (G2) in a move that will see AWS offer up to four times more GPU power at the top end.

At the tail end of 2013, AWS teamed up with graphics processing specialist Nvidia to launch the Amazon EC2 G2 instance, a GPU-accelerated instance designed specifically for graphically intensive cloud-based services.

Each Nvidia Grid GPU offers up to 1,536 parallel processing cores and gives software-as-a-service developers access to higher-end graphics capabilities, including fully supported 3D visualisation for games and professional services.

“The GPU-powered G2 instance family is home to molecular modeling, rendering, machine learning, game streaming, and transcoding jobs that require massive amounts of parallel processing power. The Nvidia Grid GPU includes dedicated, hardware-accelerated video encoding; it generates an H.264 video stream that can be displayed on any client device that has a compatible video codec,” explained Jeff Barr, chief evangelist at AWS.

“This new instance size was designed to meet the needs of customers who are building and running high-performance CUDA, OpenCL, DirectX, and OpenGL applications.”
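
As an aside on the dedicated H.264 encoder mentioned above, here is a hedged illustration of how a service might offload encoding to the GPU, assuming an ffmpeg build with NVENC support is on the PATH; the file names and bitrate are placeholders:

```typescript
// Hedged sketch: hand video encoding to the GPU's hardware H.264 encoder.
import { spawn } from "child_process";

const ffmpeg = spawn("ffmpeg", [
  "-i", "input.mp4",     // placeholder source file
  "-c:v", "h264_nvenc",  // NVIDIA hardware encoder (name varies by ffmpeg version)
  "-b:v", "5M",          // a typical bitrate for a 1080p stream
  "output.mp4",
]);

ffmpeg.stderr.on("data", (chunk) => process.stderr.write(chunk));
ffmpeg.on("close", (code) => console.log(`ffmpeg exited with code ${code}`));
```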

The new g2.8xlarge instance, available in US East (Northern Virginia), US West (Northern California), US West (Oregon), Europe (Ireland), Asia Pacific (Singapore), and Asia Pacific (Tokyo), offers four times the GPU power of standard G2 instances, including: 4 GB of video memory and the ability to encode either four real-time HD video streams at 1080p or eight real-time HD video streams at 720p; 32 vCPUs; 60 GiB of memory; and 240 GB (2 x 120) of SSD storage.
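
For illustration, launching one of the new instances programmatically might look like the following sketch using the AWS SDK for JavaScript (v2); the AMI ID is a placeholder, and a real launch would use a GPU AMI with the NVIDIA drivers baked in:

```typescript
// Illustrative sketch: requesting a g2.8xlarge via the EC2 API.
import * as AWS from "aws-sdk";

const ec2 = new AWS.EC2({ region: "us-east-1" }); // one of the supported regions

async function launchGpuInstance(): Promise<void> {
  const result = await ec2
    .runInstances({
      ImageId: "ami-xxxxxxxx",    // placeholder GPU-enabled AMI
      InstanceType: "g2.8xlarge", // the new four-GPU instance size
      MinCount: 1,
      MaxCount: 1,
    })
    .promise();

  console.log("Launched:", result.Instances?.[0]?.InstanceId);
}

launchGpuInstance().catch(console.error);
```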

GPU virtualisation is still fairly early in its development, but the technology does open up opportunities for the cloudification of a number of niche applications in pharma and engineering, which have a blend of computational and graphical requirements that have so far been difficult to replicate in the cloud (though bandwidth constraints could still create performance limitations).

European Commission to reform mobile cloud services regulations – report

The EC is looking to create a level playing field in how telcos and mobile cloud service providers are regulated

The European Commission is considering plans to reform how mobile cloud service providers, also known as Over The Top (OTT) companies, are regulated, according to a report from the Financial Times.

Draft documents unveiled by the commission indicate that the initiative to create a level playing field between the telecoms industry, cable operators and mobile cloud services like WhatsApp and Skype has not been forgotten.

According to the Commission, telcos are currently forced to compete with OTT services “without being subject to the same regulatory regime”, and it intends to create a “fair and future-proof regulatory environment for all services”.

One of the main directives of the digital single market proposals advocated by the commission relates to the roll-out of superfast broadband infrastructure across the continent. With traditional revenue streams for telcos, such as calls and messaging, on the decline, operators frequently point the finger at OTT services for enabling free and wide-reaching services.

As a consequence, operators claim they lack the incentive to invest in overhauling increasingly dated copper network infrastructure, particularly around the last mile.

That said, telcos remain hesitant to give their competitors free access to high-speed broadband infrastructure if they aren’t able to suitably monetise the service, which is where net neutrality enters the picture. Aside from the debate raging in the US of late, net neutrality formed one of the cornerstones of Neelie Kroes’ digital single market proposals, along with the abolition of consumer roaming fees.

Last month, Telecoms.com reported that the European Union’s Telecoms Council effectively conceded that a U-turn on its net neutrality ambitions was on the cards. There has yet to be an update on whether the open letter signed by more than 100 MEPs has convinced the Council to steer clear of paid prioritisation of any kind.

It is believed the commission intends to unveil its new digital single market strategy on 6 May.

Taipei Computer Association, Government launch Big Data Alliance

TCA, government officials launching the Big Data Alliance in Taipei

The Taipei Computer Association and Taiwanese government-sponsored institutions have jointly launched the Big Data Alliance, aimed at driving the use of analytics and open data in academia, industry and the public sector.

The Alliance plans to drive the use of analytics and open data throughout industry and government to “transform and optimise services, and create business opportunities,” and hopes big data can be used to improve public policy – everything from financial management to transportation optimisation – and create a large commercial ecosystem for new applications.

The group also wants to help foster more big data skills among the domestic workforce, and plans to work with major local universities to train more data and information scientists. Alliance stakeholders include National Taiwan University and National Taiwan University of Science and Technology, as well as firms like IBM, Far EasTone Telecommunications and Asus, but any data owners, analysts and domain experts are free to join the Alliance.

Taiwanese universities have been fairly active in partnering with large incumbents to help accelerate the use of big data services. Last year National Cheng Kung University (NCKU) in southern Taiwan signed a memorandum of understanding with Japanese technology provider Fujitsu which saw the two organisations partner to build out a big data analytics platform and nurture big data skills in academia.

NTT Com subsidiary RagingWire launches California datacentre

RagingWire claims the new facility gives it the largest datacentre in California

RagingWire Data Centers, a subsidiary of Japanese telecoms giant NTT Com, has cut the ribbon on its latest datacentre, known as California Sacramento 3, or CA3.

RagingWire is among a number of incumbents (Alibaba, Time Warner, Equinix) to have bolstered their cloud presence in the state of late.

The 180,000 square foot facility packs 14 megawatts of power and 70,000 square feet of server space. It is located on, and fully integrated with, the company’s 500,000 square foot datacentre campus in Sacramento, which includes two other datacentres (CA1 and CA2); combined, the company said, the campus forms the largest datacentre in the state of California at 680,000 square feet.

“Today is a big day for RagingWire, our customers, and our partners,” said George Macricostas, founder, chairman, and chief executive of RagingWire Data Centers. “The CA3 data center is the next step in RagingWire’s U.S. expansion and a new component for the global data center portfolio of NTT Communications. CA3 is truly a world-class data center.”

The move marks another big expansion for NTT Com, which together with its subsidiaries operates over 130 datacentres globally. The company said the latest facility is aimed at companies operating in the Bay Area and Silicon Valley.

“RagingWire has been a strategic addition to the NTT Communications family of companies. We look forward to working with you to deliver information and communications technology solutions worldwide,” said Akira Arima, chief executive of NTT Communications.

WebRTC, Internet of Things and API Gateways | @ThingsExpo [#IoT #WebRTC]

JavaScript and HTML5 are evolving to support a genuine component-based framework (Web Components), with the tools needed to deliver something close to a native experience, including genuine real-time networking (UDP using WebRTC). HTML5 is evolving to offer built-in templating support, the ability to watch objects (which will speed up Angular) and Web Components (which offer Angular Directives). This native-level support will offer a massive performance boost to frameworks such as Polymer and Angular, which currently have to fake all these features. It will also encourage people who are not familiar with these next-generation frameworks to get in on the action. Coming from a gaming background, I have always complained that TCP (WebSockets) is not genuinely real-time, so I look forward to seeing UDP (WebRTC) solutions being delivered, like desktop sharing in Chrome 34.
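
The “UDP using WebRTC” point can be made concrete with a small browser-side sketch: an RTCDataChannel configured as unordered with no retransmits gives UDP-like delivery, which is what real-time games want instead of TCP-based WebSockets. Signalling between the peers (exchanging the offer/answer and ICE candidates) is omitted for brevity:

```typescript
// Browser-side sketch of UDP-like delivery over a WebRTC data channel.
const pc = new RTCPeerConnection();

const channel = pc.createDataChannel("game-state", {
  ordered: false,    // don't stall new packets behind lost older ones
  maxRetransmits: 0, // drop lost packets rather than resend stale data
});

channel.onopen = () => {
  // A stale position update is worthless, so losing one is acceptable.
  channel.send(JSON.stringify({ x: 10, y: 24, t: Date.now() }));
};

channel.onmessage = (event) => {
  console.log("peer update:", event.data);
};
```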

4.7 Billion Mobile WebRTC Devices by 2018 | @ThingsExpo [#WebRTC]

Even though Apple and Microsoft haven’t commented on the new open source technology, which delivers high-quality audio and video capabilities to desktop and mobile browsers, major carriers such as AT&T and Telefónica, leading infrastructure providers like Alcatel-Lucent and Ericsson, and new WebRTC application providers like Teledini and NetDev are driving the technology forward.
Research analyst Sabir Rafiq comments: “WebRTC brings many opportunities; ABI Research believes major trends will start to form within the enterprise market with WebRTC. Companies will be willing to implement the new technology to aid productivity and reduce communication barriers within the workplace.”

ABI Research recognizes that there are significant barriers in the way of WebRTC becoming widely adopted. Firstly, Apple is not showing any interest in WebRTC, similar to its approach to Adobe Flash; as Apple is a brand leader in the mobile space, this could limit the short-term opportunity. Microsoft’s alternative to WebRTC, CU-RTC-Web, could deter users from WebRTC and should essentially be considered a competitor. In the end, the firm believes Apple and Microsoft will not ignore the market opportunity for WebRTC.
