Government Cloud Achilles' Heel: The Network | @CloudExpo #Cloud #BigData #Security

Cloud computing is rewriting the book on information technology (IT), but inter-cloud networking remains a key operational issue. Layering inherently global cloud services on top of a globally fractured networking infrastructure just doesn’t work. Incompatibilities abound, and enterprise users are forced to rely on “duct tape and baling wire” to keep their global operations limping along. The continuing gulf between IT professionals and business managers only exacerbates this sad state of affairs. IT professionals, however, bear the greater share of the blame, because we are the ones responsible for providing the operational platform and enabling the new information delivery models that drive modern constituent services and commerce.

read more

[slides] Your Bimodal Digital Future | @CloudExpo @Interoute #Cloud #DigitalTransformation

Traditional IT, great for stable systems of record, is struggling to cope with the requirements of newer, agile systems of engagement coming straight from the business.
In his session at 18th Cloud Expo, William Morrish, General Manager of Product Sales at Interoute, outlined ways of exploiting new architectures to enable both kinds of system, and of building them to support your existing platforms with an eye to the future. Technologies such as Docker and the hyper-convergence of computing, networking, and storage create a platform for consolidation, migration, and digital transformation.

read more

Announcing @MangoSpring to Exhibit at @CloudExpo Silicon Valley | #IoT #Cloud #Security

SYS-CON Events announced today that MangoApps will exhibit at the 19th International Cloud Expo, which will take place on November 1–3, 2016, at the Santa Clara Convention Center in Santa Clara, CA.
MangoApps provides modern company intranets and team collaboration software, allowing workers to stay connected and productive from anywhere in the world and from any device.

read more

Tech News Recap for the Week of 7/11/2016

Were you busy this week? Here’s a tech news recap of articles you may have missed for the week of 7/11/2016!

Scammers are creating fake websites targeting Olympic fans, Australia was the victim of more than 200,000 ransomware attacks over the past two months, and Oregon Health and Science University has agreed to pay federal authorities $2.7 million for two data breaches in 2013 that involved more than 7,000 patients. According to IDC, IaaS revenue will triple as enterprises adopt public cloud computing as a viable alternative to on-premises hardware. Amazon just bought a small startup called Cloud9 that specializes in software development tools. Microsoft wins an appeal against the U.S. government, Tesla is standing firm on the use of its self-driving Autopilot feature, and more top news from this week.

Follow us on Twitter to stay up-to-date on the latest news throughout the week!

Tech News Recap

Did you miss yesterday’s webinar about the current landscape of the hyper-converged market? Download here!

By Ben Stephenson, Emerging Media Specialist

Microsoft secures major court win with ramifications for global cloud data storage

A federal court has ruled that the US government cannot force Microsoft to hand over data stored in other countries, overturning an original decision from a magistrate judge in 2014 and giving a shot in the arm to cloud security and data sovereignty.

The key asset in the case was data held in Microsoft’s data centre in Dublin, where the US Department of Justice had sought access to an email server. The previous ruling argued that while Microsoft’s position over giving federal access to servers outside the US was ‘not inconsistent’ with statutory language, it was ‘undermined’ by the structure of the Stored Communications Act (SCA).

At the time of the original ruling, Microsoft deputy general counsel David Howard accepted the timeframes involved in going first before a magistrate judge, then a US district court judge, and eventually the federal court of appeals. As the 43-page ruling (viewable here) filed yesterday concludes: “Congress did not intend the SCA’s warrant provisions to apply extraterritorially.

“The SCA warrant in this case may not lawfully be used to compel Microsoft to produce to the government the contents of a customer’s email account stored exclusively in Ireland,” it adds. “We therefore reverse the District Court’s denial of Microsoft’s motion to quash; we vacate its order holding Microsoft in civil contempt of court; and we remand this cause to the District Court with instructions to quash the warrant insofar as it demands user content stored outside of the United States.”

Understandably, Microsoft welcomed the decision. “Since the day we filed this case, we’ve underscored our belief that technology needs to advance, but timeless values need to endure,” wrote Brad Smith, Microsoft president and chief legal officer in a statement.

“Privacy and the proper rule of law stand among these timeless values,” Smith added. “We hear from customers around the world that they want the traditional privacy protections they’ve enjoyed for information stored on paper to remain in place as data moves to the cloud. Today’s decision helps ensure this result.”

According to the BBC, the US Department of Justice was “disappointed” by the decision. If the department appeals, the case could yet go to the US Supreme Court. For Microsoft, however, the position going forward was clear. “Today’s decision means it is even more important for Congress and the executive branch to come together and modernise the law,” wrote Smith.

“We’re confident that the technology sector will continue to roll up its sleeves to work with people in government in a constructive way. We hope that today’s decision will bring an impetus for faster government action so that both privacy and law enforcement needs can advance in a manner that respects people’s rights and laws around the world.”

What you need to know about infrastructure as code – and why now

The new generation of infrastructure management technologies promises to transform the way we manage IT infrastructure. But many organisations today aren’t seeing any dramatic differences, and some are finding that these tools only make life messier. As we’ll see, infrastructure as code is an approach that provides principles, practices, and patterns for using these technologies effectively.

Why infrastructure as code?

Virtualisation, cloud, containers, server automation, and software-defined networking should simplify IT operations work. It should take less time and effort to provision, configure, update, and maintain services. Problems should be quickly detected and resolved, and systems should all be consistently configured and up to date. IT staff should spend less time on routine drudgery, freeing them to rapidly make changes and improvements that help their organizations meet the ever-changing needs of the modern world.

But even with the latest and best new tools and platforms, IT operations teams still find that they can’t keep up with their daily workload. They don’t have the time to fix longstanding problems with their systems, much less revamp them to make the best use of new tools. In fact, cloud and automation often make things worse. The ease of provisioning new infrastructure leads to an ever-growing portfolio of systems, and it takes an ever-increasing amount of time just to keep everything from collapsing.

Adopting cloud and automation tools immediately lowers barriers for making changes to infrastructure. But managing changes in a way that improves consistency and reliability doesn’t come out of the box with the software. It takes people to think through how they will use the tools and put in place the systems, processes, and habits to use them effectively.

Some IT organisations respond to this challenge by applying the same types of processes, structures, and governance that they used to manage infrastructure and software before cloud and automation became commonplace. But the principles that applied in a time when it took days or weeks to provision a new server struggle to cope now that it takes minutes or seconds.

Legacy change management processes are commonly ignored, bypassed, or overruled by people who need to get things done. Organizations that are more successful in enforcing these processes are increasingly seeing themselves outrun by more technically nimble competitors.

Legacy change management approaches struggle to cope with the pace of change offered by cloud and automation. But there is still a need to cope with the ever-growing, continuously changing landscape of systems created by cloud and automation tools. This is where infrastructure as code comes in.

What is infrastructure as code?

Infrastructure as code is an approach to infrastructure automation based on practices from software development. It emphasizes consistent, repeatable routines for provisioning and changing systems and their configuration. Changes are made to definitions and then rolled out to systems through unattended processes that include thorough validation.

The premise is that modern tooling can treat infrastructure as if it were software and data. This allows people to apply software development tools such as version control systems (VCS), automated testing libraries, and deployment orchestration to manage infrastructure. It also opens the door to exploit development practices such as test-driven development (TDD), continuous integration (CI), and continuous delivery (CD).
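
To make the idea concrete, here is a minimal sketch, in Python, of what a versioned definition with automated validation and an idempotent, unattended apply step might look like. The ServerDefinition type, its fields, and the simulated provider are assumptions invented for this article, not an example from the book or the API of any particular tool.

```python
"""Illustrative sketch of the infrastructure-as-code workflow described above.

The ServerDefinition type, its fields, and the simulated provider are
hypothetical: they stand in for whatever a real configuration-management or
cloud-provisioning tool would use, not for any specific product's API.
"""

from dataclasses import dataclass, field


@dataclass(frozen=True)
class ServerDefinition:
    """Desired state of a server, kept in version control alongside code."""
    name: str
    image: str
    cpu_cores: int
    memory_gb: int
    packages: tuple = field(default_factory=tuple)


def validate(definition: ServerDefinition) -> list:
    """Automated checks run before any change is rolled out."""
    errors = []
    if definition.cpu_cores < 1:
        errors.append("cpu_cores must be at least 1")
    if definition.memory_gb < 1:
        errors.append("memory_gb must be at least 1")
    if not definition.image:
        errors.append("an OS image must be specified")
    return errors


def apply(definition: ServerDefinition, current_state: dict) -> dict:
    """Idempotently converge a (simulated) current state to the definition.

    A real tool would call a cloud or configuration-management API here;
    this sketch just computes the changes it would make and reports them.
    """
    desired = {
        "image": definition.image,
        "cpu_cores": definition.cpu_cores,
        "memory_gb": definition.memory_gb,
        "packages": sorted(definition.packages),
    }
    changes = {k: v for k, v in desired.items() if current_state.get(k) != v}
    if changes:
        print(f"{definition.name}: applying changes {changes}")
    else:
        print(f"{definition.name}: already up to date, nothing to do")
    return {**current_state, **changes}


if __name__ == "__main__":
    web_server = ServerDefinition(
        name="web-1",
        image="ubuntu-16.04",
        cpu_cores=2,
        memory_gb=4,
        packages=("nginx",),
    )
    problems = validate(web_server)
    if problems:
        raise SystemExit(f"invalid definition: {problems}")

    state = {"image": "ubuntu-16.04", "cpu_cores": 1, "memory_gb": 4}
    state = apply(web_server, state)  # first run converges the server
    state = apply(web_server, state)  # second run is a no-op (idempotent)
```

The point is the workflow rather than the code itself: the definition lives in version control, every change is validated automatically, and applying the same definition twice produces no further changes.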

Infrastructure as code has been proven in the most demanding environments. For companies like Amazon, Netflix, Google, Facebook, and Etsy, IT systems are not just business critical; they are the business. There is no tolerance for downtime. Amazon’s systems handle hundreds of millions of dollars in transactions every day. So it’s no surprise that organizations like these are pioneering new practices for large scale, highly reliable IT infrastructure.

This book aims to explain how to take advantage of cloud-era, infrastructure-as-code approaches to IT infrastructure management. This chapter explores the pitfalls that organisations often fall into when adopting the new generation of infrastructure technology. It describes the core principles and key practices of infrastructure as code that are used to avoid these pitfalls.

Goals of infrastructure as code

The types of outcomes that many teams and organisations look to achieve through infrastructure as code include:

  • IT infrastructure supports and enables change, rather than being an obstacle or a constraint.
  • Changes to the system are routine, without drama or stress for users or IT staff.
  • IT staff spends their time on valuable things that engage their abilities, not on routine, repetitive tasks.
  • Users are able to define, provision, and manage the resources they need, without needing IT staff to do it for them.
  • Teams are able to easily and quickly recover from failures, rather than assuming failure can be completely prevented.
  • Improvements are made continuously, rather than done through expensive and risky “big bang” projects.
  • Solutions to problems are proven through implementing, testing, and measuring them, rather than by discussing them in meetings and documents.

Extract taken from Infrastructure as Code (O’Reilly Publishing) by Kief Morris.

Read more: Kief Morris: On DevOps, containers, and empowering end to end services

Kief Morris: On DevOps, containers, and empowering end to end services

The concept of infrastructure as code (IaC), where computing infrastructure is managed and configured through automation rather than through manual processes, has been around for a while. Mark Burgess’ work on CFEngine, an open source configuration management system dating back to 1993, laid the groundwork, while DevOps.com introduced its readers to the concept back in May 2014. But what does the landscape look like in 2016?

Kief Morris is cloud practice lead at ThoughtWorks, and author of Infrastructure as Code (the opening extract of which you can read exclusively here). The book discusses the different approaches to IaC and examples of dynamic infrastructure platforms, before detailing patterns for provisioning servers. The book took Morris two years to write, and in the introduction he outlines the need for IT teams and developers to move away from the traditional fire-fighting culture towards continuous improvement.

“I have been discovering, refining, and using the ideas of infrastructure as code shared by people in the DevOps movement for years,” Morris tells CloudTech. “In working with ThoughtWorks clients, I’ve found that although most people are using technologies and tools like cloud and automated configuration, many teams haven’t worked out how to fully take advantage of the tools. So I thought it would be useful to pull these ideas together into a book.”

Key to these ideas is changing working models while keeping things running smoothly. “Teams need to be empowered to own their services and applications from end to end,” explains Morris. “But teams that provide supporting services to other teams in their organisation need to engage closely with their users to make sure they’re building the right thing, while not making themselves a bottleneck for routine operations.”

Morris notes in the introduction to Infrastructure as Code that the movement is a ‘cornerstone’ of DevOps, and represents the ‘automation’ part of CAMS (culture, automation, measurement and sharing). DevOps as a whole is a skillset which research firms continually claim is vital to have – so what do organisations need to know about DevOps trends going forward?

“I believe a few shifts still need to happen,” says Morris. “One is a move from the split between ‘build’ and ‘run’, [but] this doesn’t mean every application team should build and run its own infrastructure.

“Cloud creates a clean model for teams to manage the way they use infrastructure provided by other teams, and other companies, and this empowers teams to have full ownership of their applications from concept to production,” he adds.

An increasingly important part of the conversation is through containerisation platforms, with Docker assuming the position of court favourite. A Diginomica article from July 2014 proclaimed that ‘virtualisation was dead…long live containerisation’. Naturally, containerisation is essentially just another flavour of virtualisation, but Morris notes its influence over the past couple of years. “Container references increased from a few paragraphs to a few sections, and eventually throughout the book,” he explains.

Can Docker become a platform that is fully enterprise-grade? “I believe Docker is ready for the enterprise, but not necessarily everything in the enterprise at this point,” says Morris. “Containerisation is a great model for defining the contract between applications and the systems they run on in a way that simplifies both, so it’s a natural mode for building new applications and services. But there is a mass of existing software which isn’t architected to be containerised – and we’re all still learning how to implement containerised platforms and applications.”

All in all, however, with increased knowledge of IaC, how to deploy it, and how it relates to DevOps and containers, Morris hopes readers will take away one message from the book: “Use automated pipelines to test infrastructure changes so that you can make changes to your systems confidently.”
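
To illustrate what that advice might look like in practice, here is a minimal sketch of the kind of automated check a pipeline could run against a server definition before rolling a change out. The definition format, field names, and policy rules are assumptions made for this example rather than anything prescribed by the book or by a specific tool.

```python
"""A hypothetical example of a test an automated pipeline might run before an
infrastructure change is rolled out. The definition format and the policy
rules below are invented for illustration, not taken from the book or from
any particular tool."""

import unittest

# A candidate change to a server definition, as it might look after being
# parsed from version control in a pipeline stage.
CANDIDATE = {
    "name": "web-1",
    "image": "ubuntu-16.04",
    "cpu_cores": 2,
    "memory_gb": 4,
    "open_ports": [80, 443],
}


class TestServerDefinition(unittest.TestCase):
    def test_required_fields_present(self):
        for key in ("name", "image", "cpu_cores", "memory_gb"):
            self.assertIn(key, CANDIDATE)

    def test_only_web_ports_exposed(self):
        # Policy check: web servers may expose only HTTP and HTTPS.
        self.assertTrue(set(CANDIDATE["open_ports"]).issubset({80, 443}))

    def test_resources_within_team_quota(self):
        self.assertLessEqual(CANDIDATE["cpu_cores"], 8)
        self.assertLessEqual(CANDIDATE["memory_gb"], 32)


if __name__ == "__main__":
    unittest.main()
```

A pipeline would run checks like these on every change committed to the infrastructure definitions, and only apply the change to live systems once they pass.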

Read more: What you need to know about infrastructure as code – and why now

Data Breach Handling | @DevOpsSummit #DataCenter #DevOps #InfoSec

A data breach could happen to anyone. Data managed by your company is valuable to someone, no matter what the data is. Everything has a price tag on the dark web. This is especially true of customer data, such as personal and payment card details.
When your customers’ data turns up somewhere unexpected on the Internet, you may feel the world is collapsing around you. People start tweeting about the hack, angry customers phone in, and Brian Krebs publishes his first article. Your organization switches into emergency mode to handle the situation. This is when your incident response team takes control and tries to put the genie back in the bottle.

read more

Venafi Makes It Easy for DevOps to Run Secure | @DevOpsSummit @Venafi #DevOps #ContinuousTesting

Venafi has extended the power of its platform in an easy-to-use utility for DevOps teams, available for immediate download. Now DevOps teams can eliminate the hassle of acquiring and installing TLS keys and certificates. Instead, customers can focus on speeding up continuous development and deployment, while security teams have complete visibility and can keep the DevOps environment secure and compliant to protect customer data. Extending the Venafi Trust Protection Platform requires only a single line of code and works out of the box with leading automation, orchestration, and containerization platforms including Puppet, Chef, Docker, Terraform, SaltStack, and Ansible – on premises and in the cloud.

read more

[slides] Build Operations into Cloud | @CloudExpo @BMCSoftware #Cloud #DigitalTransformation

Many private cloud projects were built to deliver self-service access to development and test resources. While those clouds delivered faster access to resources, they lacked the visibility, control, and security needed for production deployments.
In their session at 18th Cloud Expo, Steve Anderson, Product Manager at BMC Software, and Rick Lefort, Principal Technical Marketing Consultant at BMC Software, discussed how a cloud designed for production operations not only helps accelerate developer innovation but also delivers the control that IT Operations needs to run a production cloud without getting in the way.

read more