Parallels 10 for 10: Our Team’s Must-Have Playlists

Despite the numerous personalities present at Parallels, there’s one thing everyone who works here has in common—we love working to music! Walk by anyone’s desk, and you’ll usually see a pair of headphones handy (or someone who’s already jamming while hard at work). Who do you rock out to while you’re at work? We asked […]

Parallels 10 for 10 Giveaway!

It’s our birthday this week, so we’re giving away our favorite tech prizes! (As well as running this awesome offer for Parallels Desktop 10.) We’re giving away a prize each day for 10 days, in honor of Parallels Desktop 10. How to Enter It’s easy to enter to win—just tweet us @ParallelsMac and tell us […]

Amazon Web Services announces next generation EC2 instances

(c)iStock.com/zakokor

Amazon Web Services (AWS) has announced M4 instances for its EC2 cloud, adding another selection of compute instances to an already well-established list.

The M4 instances are powered by custom 2.4 GHz Intel Xeon E5-2676 (Haswell) processors, and aim to provide lower network latency and jitter – the variation in packet arrival times – through Enhanced Networking. M4 also offers dedicated bandwidth to Amazon Elastic Block Store (EBS).

AWS says M4 instances are suited to a wide variety of applications, such as relational and in-memory databases, as well as gaming servers.

“Amazon EC2 provides a comprehensive selection of instances to support virtually any workload, and we continue to deliver new technologies and high performance in our current generation instances,” said Matt Garman, AWS VP for EC2, in a statement. “With these capabilities, M4 is one of our most powerful instance types and a terrific choice for workloads requiring a balance of compute, memory, and network resources,” he added.

AWS customers can launch M4 instances using the AWS Management Console, the AWS Command Line Interface, the AWS SDKs, or third-party libraries.
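
As a rough illustration of that point, the sketch below launches an instance from the M4 family programmatically with boto3, the AWS SDK for Python. The AMI ID, key pair and subnet are placeholders, and the instance size shown is just one example from the family.

```python
# Minimal sketch: launching an M4 instance with boto3, the AWS SDK for Python.
# The AMI ID, key pair and subnet below are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="m4.large",              # one of the M4 instance sizes
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                # placeholder key pair name
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    EbsOptimized=True,                    # M4 offers dedicated EBS bandwidth
)

print(response["Instances"][0]["InstanceId"])
```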

The latest instances add to AWS’ ecosystem, but also add to the complexity of the range on offer. As a result, companies like 2nd Watch, which manages more than 10,000 AWS instances for enterprises, find value in consulting. Recent figures from the company revealed EC2 remained the most popular AWS service, with 98% of customers using it, just ahead of S3 (97%).

Zacks.com, an analyst house, argued following the launch of M4: “AWS is the biggest public cloud in the market. But the competition in the cloud market is intensifying, and so is the cloud storage war between Microsoft and Google. But amid this war, we remain extremely positive about AWS’s growth prospects. The latest launch is basically an added feather to its cap.”

You can find out more about M4 instances on the AWS website.

Melting the big data avalanche through copy data virtualisation

(c)iStock.com/baranozdemir

The volume of data within companies is growing day by day, and much of that new data comes from the uncontrolled proliferation of data copies. This avalanche of data is a major challenge for businesses, which have to manage it efficiently and securely. So, where does this copy data come from?

Copy data is redundantly generated copies of production data for purposes such as backup, disaster recovery, test and development, analytics, snapshots or migrations. According to IDC, companies might face up to 120 copies of certain production data in circulation.

In addition, IDC estimated in a 2013 study that companies spend up to $44 billion worldwide managing redundant copies of data. According to IDC, 85% of storage hardware investment and 65% of storage software spending are attributable to data copies. Managing these copies now consumes more resources in the business than the actual production data. IT departments are therefore faced with the question of how to control the data growth caused by redundant copies through cost-effective data management. This applies both to companies that hold data in-house and to data centre operators.

Stemming the data flood with copy data virtualisation

The virtualisation of data copies has proven to be an effective way to take data management to the next level. Combined with global data de-duplication and optimised network utilisation, it makes very efficient data handling possible. Because less bandwidth and storage are required, very short recovery times can be achieved.

One possible approach is the use of a so-called “Virtual Data Pipeline”: a distributed object file system in which the fundamentals of data management – copying, storing, moving and restoring – are virtualised. In this way, point-in-time virtual copies can be assembled from the collection of unique data blocks at any time. If the data must be restored, the underlying virtual object file is extracted and presented at a user-defined recovery point in any application format. Since the recovered data can be mounted directly on a server, no data movement is required at all, which contributes to extremely fast recovery times. The recovered data is immediately available.

More efficiency in data handling

The Virtual Data Pipeline technology is used to collect, manage and provide data as efficiently and effectively as possible. After creating and storing a single complete snapshot, only changed blocks of the application data are captured, using Change Block Tracking on an incremental-forever basis. Data is collected at the block level, as this is the most efficient way to track and transfer changes. Since data will ultimately be used in its native format, it is beneficial to store it in that format: there is no need to create backup files or restore data from them, and the data can be both managed and accessed more efficiently.
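
As a rough sketch of this incremental-forever, block-level idea, the example below keeps a single pool of unique blocks and records each snapshot as a list of block references; the class and method names are invented for illustration and do not correspond to any vendor’s actual implementation.

```python
# Illustrative sketch (not any vendor's actual implementation) of the
# incremental-forever principle: after one full capture, only new or changed
# blocks are stored, keyed by content hash so identical blocks are kept once.
import hashlib

BLOCK_SIZE = 4096

class BlockStore:
    def __init__(self):
        self.blocks = {}      # content hash -> block data (pool of unique blocks)
        self.snapshots = []   # each snapshot is an ordered list of block hashes

    def capture(self, volume: bytes) -> int:
        """Capture a point-in-time copy, storing only blocks not seen before."""
        manifest = []
        for offset in range(0, len(volume), BLOCK_SIZE):
            block = volume[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:   # new or changed block: store it once
                self.blocks[digest] = block
            manifest.append(digest)         # unchanged block: reference only
        self.snapshots.append(manifest)
        return len(self.snapshots) - 1      # snapshot id

    def restore(self, snapshot_id: int) -> bytes:
        """Reassemble a full point-in-time image from the unique block pool."""
        return b"".join(self.blocks[h] for h in self.snapshots[snapshot_id])
```

Each subsequent capture then costs only the storage of blocks that actually changed, while any snapshot can still be restored in full.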

Another aspect of copy data virtualisation is the ability to capture data on the basis of SLAs set by the administrator. These include the frequency of the snapshots, the type of storage in which to keep the data, the location and the retention policy. Replication to a remote location or a cloud service provider can also be defined. Once an SLA is created, it can be attached to any application or virtual machine to capture the data accordingly.
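
Purely as a hypothetical sketch, such an SLA could be expressed as a small policy object like the one below; the field names are invented and are not a specific product’s configuration schema.

```python
# Hypothetical SLA template for copy data capture; field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CopyDataSLA:
    snapshot_frequency_hours: int        # how often snapshots are taken
    storage_tier: str                    # e.g. "ssd-pool" or "capacity-pool"
    location: str                        # where the copies are kept
    retention_days: int                  # how long copies are retained
    replicate_to: Optional[str] = None   # remote site or cloud provider, if any

# Once defined, an SLA can be attached to an application or virtual machine:
erp_sla = CopyDataSLA(
    snapshot_frequency_hours=4,
    storage_tier="ssd-pool",
    location="primary-dc",
    retention_days=30,
    replicate_to="cloud-provider-eu",
)
```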

The prerequisite for generating virtual copies is the creation of a single physical “golden” image, or “master copy”, of the production data. From this, an unlimited number of virtual copies can be made available instantly for all day-to-day use cases such as backup, test and development, and analytics, without affecting the production environment. The “golden copy” can also be mirrored to an off-site location for disaster recovery.

Data virtualisation on the rise

An increasing level of virtualisation in the data centre is clearly noticeable. Data virtualisation represents the next logical step after server, compute and network virtualisation. Virtualised infrastructures are easier to manage and more energy- and cost-efficient, because the model is demand-driven compared with traditional environments. That matches today’s reality, in which a growing number of data centre challenges must be managed with fewer resources.

The proven efficiency gains from server, client and network virtualisation can now be extended to data protection and management, with lower bandwidth requirements and instant restore possibilities. With a recent expansion of its VMware and Oracle integration, Actifio leads this data virtualisation trend. The platform accelerates data management, reduces complexity in data centres and distributed environments, and enables access to the cloud.

By combining virtualisation with smart data management, companies can benefit from greater efficiency, flexibility and performance – and save money.

Public Sector Data Migration Hassles | @CloudExpo #Cloud

With careful planning and the right technology, Federal, State and Local Government IT Leaders can overcome fears of data migrations, breaking free from archaic procedures to lead the pack.
Jurassic World, the latest installment in the Jurassic Park film series, opened this week – and there’s a lot of hype surrounding the premiere as fans immerse themselves in a world of Mesozoic Era-inspired fantasy. While the creatures that make the theme park their home are strikingly realistic, their real-life counterparts went extinct millions of years ago. Many believe that the once-mighty dinosaur population fell in large part because it failed to evolve with the changing world around it. Public sector institutions face a similar plight today, especially as technology advancements demand they constantly evolve in order to keep up.

How to accelerate government IT and hybrid cloud with a DevOps boost

(c)iStock.com/gong hangxu

The rapid adoption of digital business transformation processes and the ongoing deployment of open hybrid cloud platforms are enabling organisations to achieve bold software development goals. But when management consultants and industry analysts talk about how IT innovation is changing many organisations, government leadership of this key trend typically isn’t top of mind.

That said, a new market study by MeriTalk reveals that approximately two-thirds of American federal government IT leaders say DevOps adoption will help agencies shift into the cloud computing fast lane.

Agile methodologies, continuous integration and continuous delivery are improving IT collaboration and migration speed. But according to the findings, help is required – with 66% of leaders saying that their agency needs to move IT services to the cloud faster to meet their mission and constituent needs.

DevOps is a software development and IT management method that brings software engineering, quality assurance, and IT operations together as an integrated team to collaboratively manage the full application lifecycle. The MeriTalk study examined the cultural and structural barriers to cloud adoption, and potential positive impact from DevOps practices within a government environment.

“We’ve heard a lot about cloud barriers, and we’ve all seen the lackluster GAO cloud spending data,” said Steve O’Keeffe, founder of MeriTalk. “This study highlights a viable path forward. DevOps can help agencies change lanes and shift from inefficient silos to a dynamic, collaborative environment. It’s about people and how they work together, as well as the technology they use.”

Perceived benefits of DevOps methodologies

There’s real upside potential for DevOps adoption in government — 57 percent of survey respondents believe DevOps can help agencies succeed in the cloud. Sixty-three percent say DevOps will speed software application delivery and migration.

Furthermore, 68% see DevOps as a viable path to improve collaboration between IT development, security, and operations teams. Federal IT leaders also anticipate faster application testing (62 percent) with a proven DevOps approach.

The online survey of over 150 U.S. Federal IT managers also found that they believe increasing their cloud adoption pace will boost innovation (70 percent); refresh existing applications and deploy new ones faster (69 percent); and provide more available, reliable, and secure operations (62 percent).

Barriers to cloud computing service deployment

While security and budget concerns remain top of mind within the leadership of government agencies, organization structure and cultural issues continue to slow the progress toward cloud computing service adoption.

Forty-two percent of IT leaders cite infrastructure complexity as a top challenge to adopting cloud, followed by fear of change (40 percent), inflexible practices (40 percent), and lack of a clear strategy (35 percent).

Some agencies are gaining momentum, but many are experiencing difficulties with the practical execution – since the introduction of cloud, just 44 percent of U.S. Federal agencies have made the required process or policy changes, 30 percent cultural changes, and 28 percent organisational changes.

Moreover, the study uncovered that these same Federal agencies aren’t properly equipped – only 12 percent believe that their IT department has all of the tools they need to transition to the cloud.

Four out of five IT managers (78 percent) believe their IT department needs to improve collaboration to enable a more streamlined move to the cloud. But, only 10 percent of Federal IT managers said that their software developers and administrators are highly collaborative.

Outlook for meaningful and substantive progress

Among the IT managers that understand DevOps benefits, just five percent say their agency has fully deployed a DevOps model. However, 60 percent do see DevOps in their future, and 32 percent of government IT managers already familiar with DevOps have adopted the model or plan to do so within the next twelve months.

So, what are the perceived next steps? In order to successfully implement a DevOps model, government IT managers say agencies should train their current personnel (55 percent); establish a new vision for the future (41 percent); and create an incentive for a much-needed change in organisation culture (40 percent).

Jaguar Land Rover Applies Cloud Computing to Vehicle Technology

Jaguar Land Rover is currently developing technology that will use cloud computing to push data from vehicles not only to other connected vehicles but also to municipal authorities, taking its MagneRide platform a step forward. In-vehicle sensors record the location and severity of road hazards such as potholes and raised manhole covers. This data is then pushed from the vehicle that detected it onto a cloud computing platform, where it becomes available to other connected cars within the system as well as to municipal authorities.

This information will help other connected drivers avoid the same hazards, and give local road authorities vital information about which stretches of road need priority maintenance.
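
Purely as an illustration of this kind of vehicle-to-cloud reporting, the sketch below posts a single hazard observation to a hypothetical web service; the endpoint URL, payload fields and severity scale are all invented and are not Jaguar Land Rover’s actual interface.

```python
# Illustrative only: the endpoint, payload fields and severity scale are made up
# to show the general pattern of pushing hazard reports to a cloud service.
import json
import urllib.request
from datetime import datetime, timezone

def report_road_hazard(lat: float, lon: float, hazard_type: str, severity: int) -> int:
    """Publish a single sensor-detected road hazard to a (hypothetical) cloud API."""
    payload = {
        "latitude": lat,
        "longitude": lon,
        "hazard_type": hazard_type,   # e.g. "pothole", "raised_manhole"
        "severity": severity,         # e.g. 1 (minor) to 5 (severe)
        "reported_at": datetime.now(timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        "https://example.com/road-hazards",   # placeholder endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Other connected cars, or a council's maintenance system, could then query the
# same service for hazards reported along a planned route.
```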

Mike Bell, global connected car director for Jaguar Land Rover, said these developments will “allow the vehicle to profile the road surface under the wheels and identify potholes, raised manholes and broken drain covers. By monitoring the motion of the vehicle and changes in the height of the suspension, the car is able to continuously adjust the vehicle’s suspension characteristics, giving passengers a more comfortable ride over uneven and damaged road surfaces.”

While communication with road authorities is still being designed, MagneRide technology is currently available in both the Range Rover Evoque and Discovery Sport.

This technology exemplifies a practical application of cloud computing, which is often discussed from a large-scale perspective without everyday uses being considered. It will not only help drivers negotiate hazardous roads, but will also help get those roads repaired.

Bell also noted that this technology could be another step toward driverless cars, an area in which Google has seen great success.

IBM, Revive Vending set up London Café that brews analytics-based insights

Honest Café is an unmanned coffee shop powered by Watson analytics

IBM is working with Revive Vending on London-based cafés that use Watson analytics to help improve customer service and form a better understanding of its customers.

The ‘Honest Café’ locations – there are three in London, with four more planned – are all unmanned; instead, the company deploys high-end vending machines at each site, all serving a mix of health-conscious snacks, juices, food and beverages.

The company is using Watson analytics to compensate for the lack of wait staff, with the cognitive computing platform deployed to trawl through sales data in a bid to unearth buying patterns and improve its marketing effectiveness.

“Because Honest is unmanned, it’s tough going observing what our customers are saying and doing in our cafes,” said Mark Summerill, head of product development at Honest Café. “We don’t know what our customers are into or what they look like. And as a start-up it’s crucial we know what sells and what areas we should push into.”

“We lacked an effective way of analyzing the data,” Summerill said. “We don’t have dedicated people on this and the data is sometimes hard to pull together to form a picture.”

“Watson Analytics could help us make sure we offer the right customers the right drink, possibly their favorite drink,” he said.

The company can act on these buying patterns by launching promotional offers on various goods to help improve sales, and it can also correlate the data with social media information to better inform its understanding of Honest Café customers.

“We identified that people who buy as a social experience have different reasons than those who dash in and out grabbing just the one drink,” Summerill said. “They also have a different payment method, the time of the day differs and the day of the week. Knowing this, we can now correctly time promotions and give them an offer or introduce a product that is actually relevant to them.”
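
To make the kind of segmentation Summerill describes more concrete, here is a rough sketch using pandas on a hypothetical sales export; the file name and column names are invented, and this is not how Watson Analytics itself is driven.

```python
# Rough sketch of segmenting "social" visits from "grab and go" visits using a
# hypothetical sales export; column names are invented for illustration.
import pandas as pd

sales = pd.read_csv("honest_cafe_sales.csv", parse_dates=["timestamp"])

sales["hour"] = sales["timestamp"].dt.hour
sales["weekday"] = sales["timestamp"].dt.day_name()
# Treat multi-item transactions as "social" visits, single-item ones as "grab and go".
sales["visit_type"] = (sales["items_in_basket"] > 1).map(
    {True: "social", False: "grab-and-go"}
)

# Compare the two groups by payment method, time of day and day of week.
profile = sales.groupby("visit_type").agg(
    top_payment_method=("payment_method", lambda s: s.mode().iat[0]),
    busiest_hour=("hour", lambda s: s.mode().iat[0]),
    busiest_day=("weekday", lambda s: s.mode().iat[0]),
    transactions=("timestamp", "count"),
)
print(profile)
```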

“Having direct access to Twitter data for insights into what our buyers are talking about is going to help augment our view and understanding of our customers even further,” he added.

Eagle Eye Networks CEO Dean Drako acquires cloud access firm for $50m

Eagle Eye’s CEO and former Barracuda Networks president is buying a cloud access and control company for $50m

Dean Drako, president and chief executive of Eagle Eye Networks and former Barracuda Networks president, has wholly acquired Brivo, a cloud access control firm, for $50m.

Brivo said its cloud-based access control system, a centralised management and security system for video surveillance cameras, currently services over 6 million users and 100,000 access points.

The acquisition will give Eagle Eye, a specialist in cloud-based video surveillance technology, a flexible access control tool to couple with its current offerings, Drako said.

“My goal was to acquire the physical security industry’s best access control system,” Drako explained.

“Brivo’s true cloud architecture and open API approach put it a generation ahead of other access control systems. Cloud solutions provide exceptional benefits and Brivo is clearly the market and technology leader. Brivo is also committed to strong, long-standing relationships with its channel partners, which I believe is the best strategy for delivering extremely high customer satisfaction.”

Though Eagle Eye will remain autonomous from Brivo, Drako will serve as Brivo’s chairman, while Steve Van Till, Brivo’s president and chief executive, will continue in that role.

He said Eagle Eye will work to integrate Brivo’s flagship solution, Brivo OnAir, with its cloud security camera system, which will help deliver video verification and natively viewable and searchable video.

“We are extremely excited that Dean Drako has acquired Brivo and is serving as chairman. In addition to Dean’s experience founding and leading Barracuda Networks to be a multi-billion dollar company, he has grown his latest company, Eagle Eye Networks, to be the technology leader in cloud video surveillance,” Van Till said.

“We both share the vision of delivering the tremendous advantages of cloud-based systems to our customers,” he added.

Lessons from the Holborn fire: how disaster recovery as a service helps with business continuity

Disaster recovery is creeping up on the priority list for enterprises

The recent fire in Holborn highlighted an important lesson in business continuity and disaster recovery (BC/DR) planning: when a prompt evacuation is necessary ‒ whether because of a fire, flood or other disaster ‒ you need to be able to relocate operations without advance notice.

The fire, which was caused by a ruptured gas main, led to the evacuation of 5,000 people from nearby buildings, and nearly 2,000 customers experienced power outages. Some people lost Internet and mobile connectivity as well.

While firefighters worked to stifle the flames, restaurants and theatres were forced to turn away patrons and cancel performances, with no way to preserve their revenue streams. The numerous legal and financial firms in the area, at least, had the option to relocate their business operations. Some did, relying on cloud-based services to resume their operations remotely. But those who depended on physical resources on-site were, like the restaurants and theatres, forced to bide their time while the fire was extinguished.

These organisations’ disparate experiences reveal the increasing role of cloud-based solutions ‒ particularly disaster recovery as a service (DRaaS) solutions ‒ in BC/DR strategies.

The benefits of DRaaS

Today, an increasing number of businesses are turning to the cloud for disaster recovery. The DRaaS market is expected to grow at a compound annual growth rate of 55.2 per cent from 2013 to 2018, according to global research company MarketsandMarkets.

The appeal of DRaaS solutions is that they provide the ability to recover key IT systems and data quickly, which is crucial to meeting your customers’ expectations for high availability. To meet these demands within the context of a realistic recovery time frame, you should establish two recovery time objectives (RTOs): one for operational issues that are specific to your individual environment (e.g., a server outage) and another for regional disasters (e.g., a fire). RTOs for operational issues are typically the most aggressive (0-4 hours). You have a bit more leeway when dealing with disasters affecting your facility, but RTOs should ideally remain under 24 hours.
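
As a minimal sketch of the two-tier approach just described, the recovery targets can be captured in a structure as simple as the one below; the hour values reflect the ranges mentioned above, not figures prescribed by any standard.

```python
# Two-tier recovery time objectives, using the ranges discussed in the text.
RTO_HOURS = {
    "operational": 4,   # e.g. a single server outage: 0-4 hours
    "regional": 24,     # e.g. a fire or flood affecting the facility: under 24 hours
}

def recovery_met_rto(incident_scope: str, hours_to_recover: float) -> bool:
    """Check whether a recovery finished within the RTO for its incident scope."""
    return hours_to_recover <= RTO_HOURS[incident_scope]

print(recovery_met_rto("operational", 2.5))  # True
print(recovery_met_rto("regional", 30))      # False
```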

DRaaS solutions’ centralised management capabilities allow the provider to assist with restoring not only data but your entire IT environment, including applications, operating systems and systems configurations. Typically systems can be restored to physical hardware, virtual machines or another cloud environment. This service enables faster recovery times and eases the burden on your in-house IT staff by eliminating the need to reconfigure your servers, PCs and other hardware when restoring data and applications. In addition, it allows your employees to resume operations quickly, since you can access the environment from anywhere with a suitable Internet connection.

Scalability is another key benefit of DRaaS solutions. According to a survey by 451 Research, the amount of data storage professionals manage has grown from 215 TB in 2012 to 285 TB in 2014. To accommodate this storage growth, companies storing backups in physical servers have to purchase and configure additional servers. Unfortunately, increasing storage capacity can be hindered by companies’ shrinking storage budgets and, in some cases, lack of available rack space.

DRaaS addresses this issue by allowing you to scale your storage space as needed. For some businesses, the solution is more cost-effective than dedicated on-premise data centres or colocation solutions, because cloud providers typically charge only for the capacity used. Redundant data elimination and compression maximise storage space and further minimise cost.

When data needs to be maintained on-site

Standard DRaaS delivery models are able to help many businesses meet their BC/DR goals, but what if your organisation needs to keep data or applications on-site? Perhaps you have rigorous RTOs for specific data sets, and meeting those recovery time frames requires an on-premise backup solution. Or maybe you have unique applications that are difficult to run in a mixture of physical and virtual environments. In these cases, your business can leverage a hybrid DRaaS strategy which allows you to store critical data in an on-site appliance, offloading data to the cloud as needed.
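
The sketch below illustrates one way such a hybrid arrangement could work: recent backups stay on the local appliance so aggressive RTOs can be met, while older ones are offloaded to cloud storage. The in-memory dictionaries simply stand in for the on-site appliance and the cloud bucket; this is not any particular provider’s API.

```python
# Hybrid DRaaS sketch: keep recent backups on the on-site appliance, offload
# older ones to the cloud. Dictionaries stand in for the appliance and bucket.
from datetime import datetime, timedelta

LOCAL_RETENTION = timedelta(days=7)   # keep a week of backups on the appliance

appliance_store = {}   # backup id -> data held on the on-site appliance
cloud_store = {}       # backup id -> data offloaded to cloud object storage
backup_catalog = {}    # backup id -> creation time

def tier_backups(now=None):
    """Offload any backup older than the local retention window to the cloud."""
    now = now or datetime.now()
    for backup_id, created_at in list(backup_catalog.items()):
        if backup_id in appliance_store and now - created_at > LOCAL_RETENTION:
            cloud_store[backup_id] = appliance_store.pop(backup_id)   # move off-site
        # more recent backups stay local so restores meet the tighter RTOs
```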

You might be wondering, though, what happens to the data stored in an appliance in the event that you have to evacuate your facility. The answer depends on the type of service the vendor provides for the appliance. If you’re unable to access the appliance, recovering the data would require you to either access an alternate backup stored at an off-site location or wait until you regain access to your facility, assuming it’s still intact. For this reason, it’s important to carefully evaluate potential hybrid-infrastructure DRaaS providers.

DRaaS as part of a comprehensive BC/DR strategy

In order for DRaaS to be most effective for remote recovery, the solution must be part of a comprehensive BC/DR strategy. After all, what good is restored data if employees don’t have the rest of the tools and information they need to do their jobs? These additional resources could include the following:

• Alternate workspace arrangements

• Provisions for backup Internet connectivity

• Remote network access solutions

• Guidelines for using personal devices

• Backup telephony solution

The Holborn fire was finally extinguished 36 hours after it erupted, but not before landing a blow on the local economy to the tune of £40 million. Businesses using cloud services as part of a larger business continuity strategy, however, were able to maintain continuity of operations and minimise their lost revenue. With the right resources in place, evacuating your building doesn’t have to mean abandoning your business.

By Matt Kingswood, head of managed services, IT Specialists (ITS)