Parallels adds new resources for UK channel partners

With nearly 20 years of experience in the IT industry, Steve Wilson is a seasoned veteran. He has recently brought his talents to Parallels, where he is joining the UK team to reinforce and boost channel development in the region. John Leahy, the head of sales […]

The post Parallels adds new resources for UK channel partners appeared first on Parallels Blog.

[video] A Millennial Approach to Storage | @CloudExpo @HGSTStorage #Cloud

Creating replica copies to tolerate a certain number of failures is easy, but very expensive at cloud-scale. Conventional RAID has lower overhead, but it is limited in the number of failures it can tolerate. And the management is like herding cats (overseeing capacity, rebuilds, migrations, and degraded performance).
In his general session at 18th Cloud Expo, Scott Cleland, Senior Director of Product Marketing for the HGST Cloud Infrastructure Business Unit, discussed how a new approach is necessary, one that supports the attributes of the cloud with the millions of applications and users depending on it for their business and personal lives. Object storage is the millennial approach to cloud-based data storage, archival and retrieval, and cost. Why? Because it delivers significantly higher data reliability and allows virtually unlimited expansion of storage. Perfect for a hash-tagging, selfie-taking generation that is always communicating and collaborating in an on-demand world.
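
To put rough numbers on that trade-off, the sketch below (an illustration assumed for this write-up, not taken from the session) compares three-way replication with a 10+4 erasure-coded layout of the kind object stores commonly use.

```typescript
// Illustrative overhead arithmetic (assumed figures, not from the session).

// Three-way replication: every object is stored in full on three nodes.
const replicationCopies = 3;
const replicationOverhead = replicationCopies;               // 3.0x raw capacity per usable byte
const replicationFailuresTolerated = replicationCopies - 1;  // survives 2 node losses

// Erasure coding, e.g. 10 data + 4 parity fragments (a common object-store layout).
const dataFragments = 10;
const parityFragments = 4;
const erasureOverhead = (dataFragments + parityFragments) / dataFragments; // 1.4x raw capacity
const erasureFailuresTolerated = parityFragments;                          // survives 4 fragment losses

console.log(`Replication: ${replicationOverhead}x capacity, tolerates ${replicationFailuresTolerated} failures`);
console.log(`Erasure coding: ${erasureOverhead}x capacity, tolerates ${erasureFailuresTolerated} failures`);
```

Under these assumed figures, the erasure-coded layout tolerates more simultaneous failures than three-way replication while using less than half the raw capacity, which is broadly the reliability-and-cost argument behind object storage at cloud scale.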

read more

How Information Security Threats Have Evolved | @CloudExpo #Cloud #Security

Information security has become a critical priority for many businesses over the past decade, and for good reason. It seems like a new breach is exposed on nearly a daily basis, impacting another organization and its patrons. However, some companies believe that they’re safe because they’re either too small or too big to be affected by any of these cyberattacks. The truth is that organizations of all sizes, from Target to your local dentist, are being hacked or having their data compromised, and it’s causing a major upheaval in the security community.

read more

[slides] Offline-First Apps with PouchDB | @CloudExpo @IBMCloudant #Cloud

It’s easy to assume that your app will run on a fast and reliable network. The reality for your app’s users, though, is often a slow, unreliable network with spotty coverage. What happens when the network doesn’t work, or when the device is in airplane mode? You get unhappy, frustrated users. An offline-first app is an app that works, without error, when there is no network connection.
In his session at 18th Cloud Expo, Bradley Holt, a Developer Advocate with IBM Cloud Data Services, discussed how offline-first apps built with PouchDB and Cloudant Sync can provide better, faster user experiences by storing data locally and then synchronizing with a cloud database when a network connection is available.
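
As a minimal sketch of that offline-first pattern, the snippet below writes to a local PouchDB database and keeps it in live sync with a remote Cloudant/CouchDB endpoint whenever the network allows; the database names, URL and document shape are hypothetical, not taken from the session.

```typescript
import PouchDB from 'pouchdb';

// The local database lives on the device, so reads and writes work offline.
const localDB = new PouchDB('tasks');

// Remote Cloudant/CouchDB endpoint (hypothetical URL).
const remoteDB = new PouchDB('https://example.cloudant.com/tasks');

// Writes always go to the local database first and never wait on the network.
async function addTask(title: string): Promise<void> {
  await localDB.put({ _id: new Date().toISOString(), title, done: false });
}

addTask('Write offline-first demo').catch(console.error);

// Live, retrying sync pushes and pulls changes whenever a connection is available.
localDB
  .sync(remoteDB, { live: true, retry: true })
  .on('change', info => console.log('Synced a batch of changes', info))
  .on('paused', () => console.log('Up to date (or offline, waiting to retry)'))
  .on('error', err => console.error('Sync failed irrecoverably', err));
```

The key design point is that the application only ever talks to the local database; synchronization happens in the background, so a dropped connection never surfaces as an error to the user.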

read more

Parallels Events: Meet the team at these IT events!

Parallels Events! Booking your IT calendar for the next quarter? You can find the Parallels team regularly at various IT industry events. We’re always here to talk about Macs, Virtualization, Remote Application, Mac Management, and more – but take the chance to pick our brains in person at these Parallels Events! July 26-28: BriForum – […]

The post Parallels Events: Meet the team at these IT events! appeared first on Parallels Blog.

[video] Virtual Machine Awareness with @Tintri | @CloudExpo #Cloud #Storage #Virtualization

“Tintri was started in 2008 with the express purpose of building a storage appliance that is ideal for virtualized environments. We support a lot of different hypervisor platforms from VMware to OpenStack to Hyper-V,” explained Dan Florea, Director of Product Management at Tintri, in this SYS-CON.tv interview at 18th Cloud Expo, held June 7-9, 2016, at the Javits Center in New York City, NY.

read more

IoT, cloud, and the logical progression to everything ‘as a service’

The growth of all things connected and of all things cloud seems unstoppable, and the monikers are following to their logical end: the Internet of Things will become the Internet of Everything, and the ‘as a service’ model will morph into ‘everything as a service’.

On-demand products have always had a place in business. For more than forty years, mainframe time has been sold ‘as a service’ where users only pay for what they need when they need it. At a time when businesses are growth hacking their way to larger and larger client bases and new companies are coming online every day, the on-demand model provides the perfect balance of functionality and value.

TechTarget defines XaaS as “a collective term said to stand for a number of things including ‘X as a service,’ ‘anything as a service’ or ‘everything as a service.’” XaaS could just as easily refer to CaaS (communication as a service) as to PaaS (platform as a service). As long as the service is delivered over the internet rather than installed locally, it meets the “aaS” requirement.

Service game

The “aaS” list just keeps growing:

  • Infrastructure as a service (IaaS): hardware, software, servers, storage and all of the associated maintenance are handled by someone else. Generally, this means that many routine tasks are automated by the provider, though monitoring and systems management will still require the customer’s time.
  • Desktop as a service (DaaS): the infrastructure of VDI (virtual desktop infrastructure) is hosted in the cloud. It is a lower cost alternative to VDI yet offers many of the same features: data storage, backup, security and upgrades.
  • Disaster recovery as a service (DRaaS): while data is sent via the cloud, it does not necessarily remain stored there in DRaaS solutions. The provider may offer cloud storage, or the data may be sent to a server to lie in wait until disaster strikes. Businesses that utilise DRaaS often rely upon it to function as their disaster recovery plan instead of enlisting a separate service.

Benefits of ‘as a service’ models

What is it about the “aaS” model that has businesses trading assets for memberships? In a word: flexibility. Businesses are tired of carrying technical debt and instead want their technology choices to remain as agile as the rest of their practices. The lure of a plug-and-play solution is that it eliminates the need for in-house support personnel while ensuring that tools and support are available at a moment’s notice.

Perhaps even more attractive is the flexibility to choose the right solution provider at the right moment. No longer will large monopolies of “compatible” or “preferred partners” rule decision-making. Instead, businesses have the freedom to choose the provider with the best set of features for the best price.

The future…as a service

Technology startups of today will never have known the limitations of release-specific software: the frustration of buying a software suite only to have a new release hit the shelves days after a major investment is made. While these businesses are young by today’s standards, according to HfS, “the make-up of the Fortune 500 in five years’ time will be very different from today’s. One of the key reasons for this is the rapid evolution of new business [where] all their services are in the cloud and their entire infrastructure is delivered to them on a seamless ‘as-a-Service’ model.” HfS founder Phil Fersht goes on to write that these businesses will see significant cost and speed-to-market advantages thanks to the ‘as a service’ model.

Google’s trans-Pacific submarine cable enters into service today

A consortium of tech giants including Google and NEC has completed the construction and end-to-end testing of a new trans-Pacific submarine cable system, reports Telecoms.com.

The 9,000km FASTER Cable System enters into service today (30 June), and is claimed to be the first cable system designed from the outset to support digital coherent transmission technology, using optimized fibers throughout the submarine portion. The cable lands in Oregon in the United States and at two points in Japan, Chiba and Mie. The consortium claims the cable will be able to deliver 60 Terabits per second (Tbps) of bandwidth across the Pacific.

“From the very beginning of the project, we repeatedly said to each other, ‘faster, Faster and FASTER,’ and at one point it became the project name, and today it becomes a reality,” said Hiromitsu Todokoro, Chairman of the FASTER Management Committee. “This is the outcome of six members’ collaborative contribution and expertise together with NEC’s support.”

The consortium includes China Mobile International, China Telecom Global, Global Transit, Google, KDDI and Singtel, of which Google has been one of the most vocal. On the official blog, Google said the new cable will help the team launch a new Google Cloud Platform East Asia region in Tokyo.

The new data centre in Tokyo is part of Google’s ambitions to dominate cloud computing and other enterprise service offerings. While it is generally considered to be ranked third in the public cloud stakes, with AWS and Microsoft Azure out ahead, it has been making strides in recent months. Alongside the Tokyo data centre launch, another was opened in Oregon, and there are plans for a further ten over the course of 2017.

Google has been investing in submarine cables since 2008, initially with the 7.68Tbps trans-Pacific Unity cable, which came online in 2010. The completion of the FASTER project takes the number of Google-owned undersea cables up to four, though more are likely to be added in the coming years.

“Today, Google’s latest investment in long-haul undersea fibre optic cabling comes online: the FASTER Cable System gives Google access to up to 10Tbps (Terabits per second) of the cable’s total 60Tbps bandwidth between the US and Japan,” said Alan Chin-Lun Cheung of Google Submarine Networking Infrastructure.

“We’ll use this capacity to support our users, including Google Apps and Cloud Platform customers. This is the highest-capacity undersea cable ever built — about ten million times faster than your average cable modem — and we’re beaming light through it starting today.”

Ericsson claims a world first with transcontinental 5G trial

Ericsson, Deutsche Telekom and SK Telecom have announced a partnership to deploy the world’s first transcontinental 5G trial network, reports Telecoms.com.

The objective of the agreement is to deliver optimized end-user experiences by providing consistent quality of service and roaming for advanced 5G use cases with enhanced global reach. Ericsson will act as the sole supplier to the project, which will include technologies such as NFV, software-defined infrastructure, distributed cloud, and network slicing.

Last October, Ericsson and SK Telecom conducted a successful demonstration of network slicing technology, which featured the creation of virtual network slices optimized for services including super multi-view and augmented reality/virtual reality, Internet of Things offerings and enterprise solutions.

“5G is very different from its predecessors in that the system is built as a platform to provide tailored services optimized for individual customer’s needs, at a global scale,” said Alex Jinsung Choi, CTO at SK Telecom. “Through this three-party collaboration, we will be able to better understand and build a 5G system that can provide consistent and enhanced user experience across the globe.”

Alongside the announcement, Ericsson and SK Telecom also successfully completed a demonstration of 5G software-defined telecommunications infrastructure (SDTI), using the vendor’s Hyperscale Datacenter System (HDS) 8000 solution. The pair claims this is a world first and will enable dynamic composition of network components to meet the scale requirements of 5G services.

SDTI is one of the enablers of network slicing, which will allow operators to create individual virtualized environments optimized for specific users. The demonstration itself focused on two use cases: ultra-micro-network end-to-end (E2E) slicing for personalized services, and ultra-large-network E2E slicing for high-capacity processing.

“SDTI is an innovative technology that enhances network efficiency by flexibly constructing hardware components to satisfy the infrastructure performance requirements of diverse 5G services,” said Park Jin-hyo, Head of Network Technology R&D Center at SK Telecom.

Finally, Ericsson has announced another partnership with Japanese telco KDDI with the ambition of delivering IoT on a global scale and providing enhanced connectivity services to KDDI customers.

The partnership will focus on Ericsson’s cloud-based IoT platform to deliver services such as IoT connectivity management, subscription management, network connectivity administration and flexible billing services. The pair claims the new proposition will enable KDDI’s customers to deploy, manage and scale IoT connected devices and applications globally.

IoT represents a significant opportunity for enterprise customers and operators alike, as it significantly increases both the amount of data available and the number of access points to customers worldwide. Research firm Statista estimates the number of connected devices worldwide could exceed 50 billion, though definitions of what counts as a connected or IoT device vary.

“KDDI has for a long time been committed to building the communication environment to connect with world operators in order to support the global businesses of our customers,” said Keiichi Mori, GM of KDDI’s IoT Business Development Division. “We believe that by adopting DCP, we will be able to leverage Ericsson’s connection with world carriers and furthermore promote our unified service deployment globally to customers as they start worldwide IoT deployments.”

Hybrid cloud usage in the enterprise is assured – so what needs to be done from here?

A new research report from Veritas Technologies has found that while almost three quarters of enterprises adopt multiple private and public cloud strategies, there is a pressing need for greater security and information management.

Coming from an information management software provider, this conclusion is hardly surprising. Yet the statistics about continued hybrid cloud usage are worth the entrance fee. The manufacturing industry is the most assured in migrating to the public cloud, with almost a third (30%) of workloads there, compared with telecommunications (24%), healthcare (23%), financial services (23%) and the public sector (16%), with Japan and Brazil leading the way geographically.

Today, 38% of workloads exist in a private cloud and 28% in a public cloud, figures expected to rise by 7% and 18% respectively over the coming 12 months. While cost remains the primary driver and security the main inhibitor of moving to the public cloud, more than a quarter of respondents said backup and recovery (28%), disaster recovery (27%), and data warehousing (26%) would always remain on-premises.

With the move to public cloud being described as ‘messy’ and heterogeneous, more than four in five (81%) say they rely on service providers for help with implementation, as well as ongoing operations.

“This research underlines the current state of the hybrid cloud world,” said Simon Jelley, VP product management at Veritas. “This world is more – not less – heterogeneous, which can mean increasing complexity from an information management perspective. Organisations must be more vigilant than ever in identifying IT blind spots and potential security risks to avoid unplanned downtime or an information crisis.”

The Veritas research is perhaps more optimistic than other recent reports; a study from VMTurbo earlier this month found more than half of organisations polled did not have a multi-cloud strategy in place.