How an IT Manager Leverages Parallels Desktop for Mac Business Edition to Reduce Time and Hardware Costs While Running Windows and Linux on a Mac

Guest blog post by user Andrew Derse. Virtualization offers unparalleled cost reduction, productivity, and time savings for IT managers and system admins. With the capability to run Windows and Linux seamlessly on a Mac, Parallels Desktop for Mac Business Edition can streamline day-to-day use and reduce headaches when managing a company’s environment. Since 1998, industry professionals have […]

The post How an IT Manager Leverages Parallels Desktop for Mac Business Edition to Reduce Time and Hardware Costs While Running Windows and Linux on a Mac appeared first on Parallels Blog.

[session] @SuJamthe on #MachineLearning | @ThingsExpo #BigData #IoT #ML

Intelligent machines are here. Robots, self-driving cars, drones, bots and many IoT devices are becoming smarter with Machine Learning.
In her session at @ThingsExpo, Sudha Jamthe, CEO of IoTDisruptions.com, will discuss the next wave of business disruption at the junction of IoT and AI, impacting many industries and set to change our lives, work and world as we know it.

[session] Staying Secure and Organized in the Cloud | @CloudExpo #Cloud #Security #Compliance

As companies adopt the cloud to streamline workflows, deployment hasn’t been seamless because of IT concerns surrounding security risks. The cloud offers many benefits, but protecting and securing information can be tricky across multiple cloud providers and remains IT’s overall responsibility.
In his session at 19th Cloud Expo, Simon Bain, CEO of SearchYourCloud, will address security compliance issues associated with cloud applications and how document-level encryption is critical for supplementing existing enterprise security solutions. He will draw from case studies, outline best practices for businesses and demo how data can be transported and stored to and from the cloud already encrypted and securely accessed no matter where it’s stored.

[session] Optimizing Ops | @DevOpsSummit @RedHatNews @GHaff #DevOps

Software development and operations aren’t merging to become one discipline. Nor is operations simply going away. Rather, DevOps is leading software development and operations – together with other practices such as security – to collaborate and coexist with less overhead and conflict than in the past.
In his session at @DevOpsSummit at 19th Cloud Expo, Gordon Haff, Red Hat Technology Evangelist, will discuss what modern operational practices look like in a world in which applications are more loosely coupled, are developed using DevOps approaches, and are deployed on software-defined, and often containerized, infrastructures – and where operations itself is increasingly another “as a service” capability from the perspective of developers.

[session] #BigData in the Cloud | @CloudExpo @SoftNets #IoT #Virtualization

Enterprises have been using both Big Data and virtualization for years. Until recently, however, most enterprises have not combined the two. Big Data’s demands for higher levels of performance, the ability to control quality-of-service (QoS), and the ability to adhere to SLAs have kept it on bare metal, apart from the modern data center cloud. With recent technology innovations, we’ve seen the advantages of bare metal erode to such a degree that the enhanced flexibility and reduced costs that clouds offer tip the balance in favor of virtualization for Big Data deployments. And for organizations concerned about performance, virtualized environments can address that issue or in some cases even perform better than bare metal.

More than half of enterprises choosing cloud by default, survey says

(c)iStock.com/RapidEye

A new report from enterprise cloud services provider ServiceNow argues that a tipping point has finally been reached, with more than half of the enterprises surveyed saying they would use cloud as the default choice for business application rollouts.

52% of the 1,850 senior managers polled across four continents said they now prefer cloud as a platform compared with on-premise data centres, while more than three quarters (77%) of respondents expect to shift to a cloud-first model in the next two years.

The primary reason for this shift, the report argues, is the emergence of DevOps. 94% of respondents said they were involved with the DevOps movement in some way, while three quarters (76%) admitted it was a major factor driving the move to cloud-first operations. 88% of those polled said that cloud could replace a formal IT department “at least some of the time.”

Just under three quarters of those polled (72%) said IT’s relevancy is on the rise following a cloud-first reorganisation, while 68% said IT will be ‘completely essential’ in the future. Ultimately, ServiceNow argues, the role of IT will have to shift from ‘builder’ to ‘broker’ to succeed in the cloud-first environment.

“Amidst the cloud-first shift, there are ominous signs for IT if there’s no change,” said Dave Wright, chief strategy officer at ServiceNow. “We believe this presents a real opportunity for those visionary IT organisations who can become strategic partners to the enterprise during this shift to cloud-first.”

Not all the findings were rosy, however, with almost nine in 10 (89%) companies polled who had shifted to a cloud-first model saying their current IT staff lacked the required skill sets to be successful. Achieving 360 degree visibility for the business (64%) was named as the biggest priority going forward.

IBM Buys a Hybrid Cloud Company Called Sanovi Technologies

IBM has acquired a company called Sanovi Technologies to give a boost to its hybrid cloud offerings. According to a company release, this acquisition will enhance the resiliency capabilities of IBM’s cloud tools, so it can provide more advanced analytics for hybrid environments. The financial details of the transaction were not disclosed.

Sanovi Technologies is a company based in Bangalore, India. It was founded in 2003 by Chandra Sekhar Pulamarasetti, Lakshman Narayanaswamy, and Raja Vonna, and has operations in the United States, the Middle East, and India. The company’s Application Defined Continuity (ADC) technology spreads workloads across different physical, virtual, and cloud infrastructures. During a disaster, this distribution makes recovery easier while mitigating the disaster’s impact. IBM believes this capability to distribute workloads will give a big fillip to its own Disaster Recovery Management (DRM) solutions. In addition, ADC can help simplify workflows, automate disaster recovery, and reduce operational costs and time.

Sanovi Technologies also offers a cloud migration manager platform to help businesses and enterprises move to the public cloud. This enterprise software platform provides lifecycle automation along with workload migration design. The migration manager is also built on ADC to ensure business continuity during migration.

Both the ADC technology and the migration manager tool are relevant today, as more companies migrate to the cloud, so the acquisition could give IBM a significant boost in this area.

The acquisition is expected to be completed by the end of 2016, after which Sanovi will be integrated into IBM’s Global Technology Services unit. Eventually, IBM plans to combine Watson’s capabilities with Sanovi’s DRM technology so that end clients can have a proactive business continuity plan. In fact, IBM plans to help businesses transition from a business continuity plan to a proactive resilience program, so that potential failures can be identified and fixed before they occur. If IBM’s plan falls into place, it could signal the beginning of a new approach to disaster recovery.

This move can be significant for several reasons. Firstly, climate change and unpredictable weather patterns have increased the chances of wilder weather, which in turn can impact businesses profoundly. To tackle such situations, a proactive approach and a sound DRM solution that distributes workloads to regions unaffected by a disaster can make a huge difference to companies’ business operations. Secondly, it can give IBM an edge over its competitors in the hybrid cloud, as it can combine DRM with Watson’s capabilities to provide a fool-proof DRM service.

Thirdly, the acquisition can give IBM a firm foothold in the growing Indian market. Since hybrid clouds are the preferred choice for enterprises in India, the deal should provide these clients with greater security, efficiency, and productivity. It can also help IBM capture a larger share of one of the fastest-growing economies in the world.

According to a press release from IBM, Sanovi’s DRM service will be offered as a standalone product on a monthly or yearly subscription basis.

The post IBM Buys a Hybrid Cloud Company Called Sanovi Technologies appeared first on Cloud News Daily.

Overcoming the Achilles heel of flash storage: Making flash last longer

(c)iStock.com/loops7

More corporations than ever are taking advantage of flash storage, and you use it every day – be it in your computer, phone, wearable device, or in the USB drive you back up your data on. It’s faster, more durable, and more reliable than a spinning disk, but it still does have one weakness: it deteriorates over time.

In theory, all of the flash’s advantages eventually go out the window as its lifespan wanes, and since it’s more expensive than a spinning disk, experts are trying to find new ways to make it live longer.

How flash storage deteriorates

To put it simply, flash storage is made up of a group of cells. Each of these cells is individually written with data and then holds that data for you to reference later. Your pictures, documents, and videos are all broken up into many tiny pieces and individually written into these cells, similar to writing something on paper with a pencil.

As with pencil and paper, when you erase what a cell held – removing a photo or document – and replace it with something else – a new photo or document – you force the cell to rewrite itself, like rubbing pencil marks off paper and writing over them. Use the eraser too much and the paper thins out and eventually rips. That is the Achilles heel of flash storage.

How experts are making flash last longer

1. Machine learning adjusting the load to make flash cells last longer: There are high points and low points in everything’s lifespan, and flash is no different. The kind of load it can take early in its life is going to be different from later in life, with a few ups and downs in the middle as well.

Machine learning is being developed and tested by experts to automatically identify where these high and low points are, and to lighten the load or distribute the strain that rewriting puts on flash storage cells so they are not overtaxed and worn out sooner. The process is so fine-grained, so fast, and so difficult to observe that it is impossible for humans to do manually – which is exactly why the task is given to the machine itself.
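
As a rough, purely illustrative sketch of the idea (a fixed heuristic standing in for the ML models described above; the class name, block count, and budget figures are all invented), a controller could track per-block erase counts and steer each write to the least-worn block while flagging when wear is running ahead of the device's target lifetime:

```python
import time

# Hypothetical figures for illustration only.
ERASE_BUDGET_PER_BLOCK = 3_000                   # assumed P/E cycles each block can take
TARGET_LIFETIME_SECONDS = 5 * 365 * 24 * 3600    # aim for roughly five years of service


class WearPacer:
    """Pace and place writes so total wear tracks the device's target lifetime."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.start = time.monotonic()

    def allowed_erases(self):
        # How many erases the device "should" have used by now to stay on budget.
        elapsed = time.monotonic() - self.start
        total_budget = ERASE_BUDGET_PER_BLOCK * len(self.erase_counts)
        return total_budget * (elapsed / TARGET_LIFETIME_SECONDS)

    def pick_block(self):
        # Send the next write to the least-worn block, and tell the caller
        # whether wear is running ahead of the lifetime budget.
        ahead_of_budget = sum(self.erase_counts) > self.allowed_erases()
        block = min(range(len(self.erase_counts)), key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block, ahead_of_budget   # caller may defer or batch writes when True
```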

2. Removing static data from wear-levelling flash storage: There is static data and there is active data, both of which live up to their names.

Static data, once written to a cell, never needs to be written again – so those cells can take a break; their job is done. Active data, however, such as a program that is written and rewritten often, makes its cells work constantly. To keep any one cell from wearing out too fast, flash has a built-in feature called wear levelling that spreads that load out over multiple cells.

The problem is that static data makes it impossible to spread that wear evenly – it is not being rewritten, so the cells it occupies cannot share the load. The cells holding active data therefore wear out considerably faster, and when they wear out, the whole drive’s lifespan is dramatically reduced. That is why experts have created wear-levelling and non-wear-levelling flash storage, so active data and static data can each have their own storage and no part of the flash is overburdened.
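
A minimal sketch of that separation (not any vendor's actual algorithm; the pool sizes and the "hot" threshold below are invented for illustration) keeps frequently rewritten data and rarely rewritten data in separate pools and wear-levels each pool independently:

```python
class SegregatedStore:
    """Place data in a 'hot' or 'cold' block pool and wear-level within each pool."""

    def __init__(self, hot_blocks, cold_blocks):
        # Track how many times each block in each pool has been written.
        self.hot_wear = {block: 0 for block in hot_blocks}
        self.cold_wear = {block: 0 for block in cold_blocks}

    def place(self, rewrites_per_day):
        # Hypothetical threshold: anything rewritten more than once a day is "hot".
        pool = self.hot_wear if rewrites_per_day > 1 else self.cold_wear
        block = min(pool, key=pool.get)   # least-worn block in the chosen pool
        pool[block] += 1
        return block


store = SegregatedStore(hot_blocks=range(0, 64), cold_blocks=range(64, 256))
print(store.place(rewrites_per_day=50))   # active data lands in the hot pool
print(store.place(rewrites_per_day=0))    # static data lands in the cold pool
```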

3. Digital signal processing to make bit errors more readable: When the cells are written and rewritten too often, they can misunderstand the finer details of what they’re supposed to write, which essentially makes them mistranslate the data. This is called a bit error, and the cells have to work harder to decipher what the data really means, putting a strain on them and their lifespan.

Experts have developed storage systems that use digital signal processing (DSP) to take on part of that burden, splitting up the work so that bit errors can be avoided or handled faster and the flash storage itself is strained less. This is a partial fix, however, that extends flash life only to a certain degree.
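
The underlying principle can be shown with a much cruder stand-in than DSP: an error-correcting scheme that tolerates a misread bit instead of demanding a perfect read. The toy repetition code below (invented for illustration; real controllers pair DSP with far stronger codes such as LDPC) recovers data even when one stored bit flips:

```python
def encode(bits):
    # Store every bit three times.
    return [b for bit in bits for b in (bit, bit, bit)]


def decode(raw):
    # Recover each original bit by majority vote, masking one bad read per group.
    return [1 if sum(raw[i:i + 3]) >= 2 else 0 for i in range(0, len(raw), 3)]


stored = encode([1, 0, 1, 1])
stored[4] ^= 1                      # simulate a bit error in one worn cell
assert decode(stored) == [1, 0, 1, 1]
```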

Conclusion

Flash storage is like all things; it’s mortal. It’s impossible, for now, to create flash which doesn’t wear out, but in the meantime, experts are tailoring certain storage to help spread out the work and bear the additional weight, and even teaching the flash storage itself how to better allocate its efforts in converged infrastructure to keep it fresh. The longer the storage lives, the more cost effective it is, which inherently drives more funds to improving its lifespan.

Advanced cloud security: Standards and automation in a multi-vendor world

(c)iStock.com/maxkabakov

Enterprise IT has long struggled to develop common standards for the security of cloud deployments. With multiple cloud vendors, fast-moving product teams, and a changing security landscape, it is perhaps no wonder that enterprises are left asking:

  • What is the right cloud security standard?
  • What level of security is “good enough”?
  • And most importantly — how do we apply these standards in a consistent way to existing and new cloud environments?

In July 2016, the Ponemon Institute published The 2016 Global Cloud Data Security Study, an independent survey of 3,400 technology and security professionals. More than half of the respondents did not have measures for complying with privacy and security requirements in the cloud. That is clearly a problem.

We sat down with Dan Rosenbloom, the lead AWS architect at Logicworks, to talk about his own struggle with standardization and how he enforces common configurations across 5,000+ VMs on AWS.

Why do you think central IT and GRC teams have struggled to keep up with the pace of cloud adoption?

Every company is different, but this struggle usually happens when the business goal of getting to the cloud and meeting deadlines trumps long-term goals, like setting up standards for consistency and manageability. In fast-moving companies, every team builds a cloud environment on a different platform following their own definition of cloud security best practices; security teams are often pressured to complete reviews of these custom, unfamiliar environments in short time frames. Both developers and security professionals end up unhappy.

Many central IT departments are in the process of reinventing themselves. You do not become a service-oriented IT team overnight — and when you add cloud adoption into the mix, you put further strain on already stretched processes and resources.

Have you seen a company go through this process well? Or is this struggle inevitable?

To some degree, every company struggles. But the ones that struggle less do some or all of the following:

  • They choose (and stick to) a limited set of cloud security guidelines for all projects on all platforms (e.g. NIST, IT consortiums like Cloud Council, and industry-specific associations like the Legal Cloud Computing Association)
  • Central IT maintains a strong role in cloud purchasing and management
  • Central IT commits to an automation-driven approach to cloud management, and developers work from modular templates and scripts rather than building one-off environments

What standard do most organizations apply in cloud security?

From a strategic perspective, the issue boils down to who can access your data, how you control that access and how you protect that data while it is being used, stored, transmitted and shared. At Logicworks, we build every cloud to at least PCI DSS standards, which is the standard developed by credit card companies to protect consumer financial information. We choose PCI because a) it is specific and b) we believe it represents a high standard of excellence. Even clients with no PCI DSS compliance requirement meet at least these standards as a baseline.

If your company has existing annual infrastructure audit processes and standards, a supplementary standard like Cloud Council can help orient your GRC team for cloud-specific technologies.

How does central IT enforce this standard?

One of the benefits of cloud technology is that you can change your infrastructure easily – but if your environment is complex, change management and governance quickly become a nightmare. Automation is the key.

One of Logicworks’ main functions as a managed service provider is to establish a resource’s ideal state and ensure that even as the system evolves, it never strays from that state. We use four key processes to do this:

  • Infrastructure automation: Infrastructure is structured and built into templates, where it can be versioned and easily replicated for future environments. Tools: AWS CloudFormation, Git
  • Configuration management: Configuration management scripts and monitoring tools catch anomalies and proactively correct failed/misconfigured resources. This means that our instances replace themselves in case of failure, errors are corrected, and ideal state is always maintained. Tools: Puppet, Chef, Jenkins, AWS Autoscaling, Docker, AWS Lambda, AWS Config Rules
  • Deployment automation: Code deployment processes are integrated with cloud-native tools, improving deployment velocity and reducing manual effort (and error). Tools: AWS CodeDeploy, AWS Lambda, Jenkins
  • Compliance monitoring: Systems are continuously monitored for adherence to security standards and potential vulnerabilities, and all changes are logged for auditing; a small example of this kind of check follows this list. Tools: Git, AWS Config Rules, AWS CloudTrail, AWS Config, AWS Inspector, Sumo Logic, configuration management (Puppet, Chef, Ansible)
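
For instance, a compliance check can be as small as a script that flags S3 buckets with no default encryption configured. The sketch below is a generic illustration rather than Logicworks' actual tooling, and it assumes AWS credentials are already configured for boto3:

```python
import boto3
from botocore.exceptions import ClientError


def unencrypted_buckets():
    """Return the names of S3 buckets that have no default encryption configured."""
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            s3.get_bucket_encryption(Bucket=bucket["Name"])
        except ClientError as err:
            if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
                flagged.append(bucket["Name"])   # no default encryption on this bucket
            else:
                raise
    return flagged


if __name__ == "__main__":
    print("Buckets without default encryption:", unencrypted_buckets())
```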

It is easiest to see how this works together in an example. Let’s say central IT wants to build a standard, PCI DSS compliant environment that product teams can modify and use for their own individual projects. Essentially, they want to “build in” a baseline of security and availability standards into a repeatable template.

First, they would write or create core resources (compute, storage, network) in a framework like AWS CloudFormation, with some basic rules and standards for the infrastructure level. Then the OS would get configured by a configuration management tool like Puppet or Chef, which would perform tasks like requiring multi-factor authentication and installing log shipping and monitoring software, etc. Finally, those resources receive the latest version of code and are deployed into dev/test/production.
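
To make the shape of such a template concrete, here is a deliberately tiny, hypothetical sketch of "baking a baseline in": a CloudFormation template generated from Python whose single resource, an S3 bucket, is encrypted and blocked from public access by default. A real baseline template would cover far more (networking, IAM, logging, monitoring):

```python
import json

# Minimal CloudFormation template with security defaults built in.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AppBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                },
                "PublicAccessBlockConfiguration": {
                    "BlockPublicAcls": True,
                    "BlockPublicPolicy": True,
                    "IgnorePublicAcls": True,
                    "RestrictPublicBuckets": True,
                },
            },
        }
    },
}

print(json.dumps(template, indent=2))   # version this in Git and hand it to CloudFormation
```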

Obviously I am oversimplifying this process, but we have found that this is an extremely effective way to ensure that:

  • Product teams have access to an approved, “best practices” template, which ensures they’re not launching completely insecure resources
  • You have a central source of truth and can manage multiple systems without having to log in and make the same change on hundreds of systems one by one
  • Security teams have a record of all changes made to the environment, because engineers apply changes through the configuration management tool (rather than one-off)
  • Standards are continually enforced, without human intervention.

How do you bring engineers on board with an automation system?

I have been in IT for about 15 years, and I come from a very “traditional” systems engineering background. I always tried to follow the old school IT rule that you should never do the same thing twice — and the cloud just gives me many new opportunities to put this into practice.

I got introduced to configuration management a few years ago and was obsessed from day 1. Most engineers like scripting things if there is an annoying problem, and cloud automation takes care of some of the most annoying parts of managing infrastructure. You still get to do all the cool stuff without the repetitive, mindless work or constant firefighting. Once your engineers get a taste of this new world, they will never want to go back.

Any advice for IT teams that are trying to implement better controls?

The automation framework I just described — infrastructure automation, deployment automation, and configuration management — is complex and can take months or years to build and master. If you work with a Managed Service Provider that already has this framework in place, you can achieve this operational maturity in months; but if you are starting on your own, do not worry about getting everything done at once.

Start by selecting a cloud security standard or modifying your existing standard for the cloud. Then build these standards into a configuration management tool. Even if you do not build a fully automated framework, implementing configuration management will be a huge benefit for both your security team and your developers.

The post Advanced Cloud Security: Q&A with a Sr. DevOps Engineer appeared first on Logicworks Gathering Clouds.

How to protect your Mac from risks like ransomware and shadow IT

It is more important than ever to safeguard your digital assets from increasing risks and threats. Have you already heard of ransomware and shadow IT? Today, I would like to talk about these two serious risks and give you some tips to protect yourself from them. Let’s start with ransomware, which is one of the […]

The post How to protect your Mac from risks like ransomware and shadow IT appeared first on Parallels Blog.