Microsoft launches Azure confidential computing to protect data encrypted in use

Microsoft has announced the launch of ‘confidential computing’ in Azure, claiming to be the first public cloud provider to offer encryption of data while in use.

The project, which a variety of Microsoft teams have been working on for four years, is similar in scope to the Coco Framework, Redmond’s confidential computing blockchain initiative.

“Despite advanced cybersecurity controls and mitigations, some customers are reluctant to move their most sensitive data to the cloud for fear of attacks against their data when it is in-use,” Mark Russinovich, Microsoft Azure CTO wrote in a company blog post. “With confidential computing, they can move the data to Azure knowing that it is safe not only at rest, but also in use from [various] threats.”

The threats Russinovich outlined included classic scenarios: malicious insiders with administrative privileges, as well as hackers and malware exploiting bugs in operating systems. The platform Microsoft is building enables developers to take advantage of different trusted execution environments (TEEs) – which ensure there is no way to view data from the outside – without having to change their code.
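
To see why ‘in use’ is the hard part, here is a minimal illustrative sketch – using the third-party Python cryptography package, and in no way Azure’s actual API – of the gap that conventional encryption leaves open:

```python
# Minimal illustrative sketch (third-party "cryptography" package, not an
# Azure API): conventional encryption protects data at rest, but processing
# it still exposes plaintext in ordinary memory.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

# At rest: ciphertext only - a stolen disk or database dump reveals nothing.
ciphertext = f.encrypt(b"portfolio: AAPL 40%, MSFT 35%, cash 25%")

# In use: any real computation first decrypts, so the plaintext sits in
# regular memory, readable by a privileged insider, a compromised OS or
# a hypervisor - exactly the threats Russinovich lists.
plaintext = f.decrypt(ciphertext)
print(b"MSFT" in plaintext)  # processing requires the decrypted bytes

# A TEE runs this decrypt-and-compute step inside hardware-isolated memory,
# so the plaintext is never visible from outside the enclave.
```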

“We see broad application of Azure confidential computing across many industries including finance, healthcare, AI and beyond,” Russinovich wrote. “In finance, for example, personal portfolio data and wealth management strategies would no longer be visible outside of a TEE. Healthcare organisations can collaborate by sharing their private patient data, like genomic sequences, to gain deeper insights from machine learning across multiple data sets without risk of data being leaked to other organisations.

“In oil and gas, and IoT scenarios, sensitive seismic data that represents the core intellectual property of a corporation can be moved to the cloud for processing, but with the protections of encrypted-in-use technology,” Russinovich added.


Why IT needs to be an enabler for business to succeed

Delivering an IT service to a business is difficult: it needs to support and enable the success of the business. In my experience, every IT department wants to provide the best service it possibly can. Let’s face it, we all do – if the business is a success, everyone wins.

There are a huge number of moving parts in the IT infrastructure of a company, with a great many complex interactions. These typically happen between the teams that manage specific sections of the infrastructure. Unfortunately, there is often a disconnect between what the business needs and what IT delivers, a result of the many different issues facing organisations.

Disconnect and miscommunication

Firstly, there are communication challenges. There can be a lack of understanding of business priorities on the part of IT departments, usually the result of a lack of alignment between the business’s strategy and KPIs and those of the IT department. Conversely, there is often a lack of understanding by the business when it comes to the problems facing the IT teams, and a view that IT ‘should just work’.

The simple fact is that IT problems are often viewed as difficult to translate into layman’s language, but that does not need to be the case. The issues can be translated, perhaps only at a high level, but translated all the same. The detail itself may be complex, but does everyone really need to know or understand the nitty-gritty?

To use an old, much-used analogy: do you really need or even want to know the details of why your car has broken down, or would you just like it to be fixed and to know when you’ll be able to get back on the road? Trust is a big component here. Would you take your expensive car to a garage that hasn’t previously delivered on its promises, or would you try a different one?

It’s the same when it comes to IT. When an incident occurs that impacts the business, the IT department often comes under extreme pressure – from the business, but also from within – to fix the problem. The first step is to identify the cause. It could be obvious, but frequently it can take days or even weeks to find, depending on the complexity of the estate and on the visibility and expertise the IT staff have. Delays in resolution are commonplace for ‘grey’ issues: a grey issue is a malfunction in some unidentified part of the IT estate that is not causing an outage but is causing poor performance and user frustration.

Once the problem is identified, the technical experts and management are required to come up with a remediation plan. While the problem must be fixed, it needs to be done in a way that doesn’t impact any other critical systems and that ensures no new holes in the system are created. The service needs to be restored as closely as possible to its previous state, with the fix planned and documented. It must also have the engagement of staff at a senior level.

There are always technical challenges that dog the IT department. There is the familiar technical debt, where an ageing or out-of-date infrastructure is cajoled daily into performing above its capabilities. Staffing levels and a lack of key skills can be a problem for all departments, and IT is no exception. A lack of monitoring is another issue – and where monitoring does exist, is it managed and acted upon?

Then there are the financial challenges of having a fully functioning, state-of-the-art IT department that seamlessly aids the front end of the business. Keeping infrastructure up to date is expensive and needs constant review, thanks to the rate of change in the industry. Skilled, qualified staff are also in high demand and expensive; loyalty and competence carry a price tag.

The business often thinks of IT as a ‘sunk’ cost, a bit like facilities management, the idea being that the IT department ‘keeps the lights on’ in just the same way. But IT is also a vital part of a company’s bottom line. IT is present in every part of a business in ways the organisation itself often does not fully understand. In most organisations today IT is a core business function: in other words, without it the business would fail.

Misalignment between IT and business strategy means the IT department can’t allocate spend and effort to what the business needs. It is a frustrating experience for the IT department to be taken to task for poor performance while being offered no guidance on how it can best help the business succeed.

Pointers for success

For IT to help the business succeed, there are some key steps that can and should be taken. Firstly, map business functions to IT components and identify the critical paths for application data and networking, as in the sketch below. This will clarify what goes where and pinpoint any weaknesses. It will also help should there be a need for remedial action, as it allows for swift and economical targeting, and it supports capacity planning and management information, reporting data based on actual facts rather than hearsay or ‘wetted fingers’.
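
As a minimal sketch of such a mapping (every function and component name below is a hypothetical example, not a prescribed model), even a simple lookup table can answer the key incident question – which business functions run through a given component:

```python
# Hypothetical sketch: map business functions to the IT components they
# depend on, then answer "what does the business lose if X degrades?"
business_map = {
    "order-processing": ["web-frontend", "payments-api", "orders-db", "core-switch"],
    "customer-support": ["crm-app", "voip-gateway", "core-switch"],
    "reporting":        ["data-warehouse", "etl-jobs", "orders-db"],
}

def impacted_functions(component: str) -> list[str]:
    """Business functions whose critical path runs through this component."""
    return [fn for fn, deps in business_map.items() if component in deps]

# Swift, economical targeting: a degraded core switch maps straight to
# the business functions at risk - fact rather than hearsay.
print(impacted_functions("core-switch"))  # ['order-processing', 'customer-support']
```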

Using comprehensive tools that monitor infrastructure and applications and capture packets will give IT teams visibility and control of their systems, along with the ability to pull together disparate data from separate monitoring components. With this in place, the IT department will be able to predict where problems may arise, spot unusual activity, and pinpoint and fix problems the moment they occur, rather than spending resources on time-consuming forensic IT analysis after the fact. With less downtime and quicker response times, the business, end users and the IT department all win.
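
As a toy illustration of ‘spotting unusual activity’ – a deliberately simple baseline check, not a stand-in for any particular monitoring product – a metric sample can be flagged when it falls far outside its recent history:

```python
# Toy baseline check: flag a metric sample that sits far outside the
# spread of its recent history.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """True if `latest` is more than `threshold` standard deviations
    from the mean of the recent samples."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) / sigma > threshold

response_ms = [42.0, 40.5, 43.1, 41.7, 39.9, 44.2]  # recent response times
print(is_anomalous(response_ms, 41.0))   # False - normal variation
print(is_anomalous(response_ms, 180.0))  # True - act before users notice
```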

IT and business must communicate, communicate, communicate. This cannot be stressed enough. A clear and consistent two-way line of communication from board level down is essential. When business strategies, tactics and targets are defined, everyone, including IT, should be comfortable that they are achievable, planned out and given a clear timeline.

IT not only keeps the lights on but impacts the bottom line. If IT understands what the business needs and vice versa, there is a far greater chance of the two succeeding together.

Heptio Raises More Money

Innovation is the order of the day in the tech industry, which explains why startups and small businesses in the cloud space are attracting good funding to pursue their ideas. The latest in this regard is a company called Heptio, which has raised $25 million in a Series B round led by Madrona Venture Partners, Lightspeed Venture Partners and Accel Partners.

This round of funding comes within a year of Heptio’s $8.5 million Series A. If you’re wondering about the seed money, this Seattle-based company didn’t raise any because it didn’t need it.

According to CEO and co-founder Craig McLuckie, the last eight months have been amazing for the company, as they didn’t expect to raise another round of money within such a short time.

So, what does this company do to attract so much funding?

Heptio helps companies realize the true power of Kubernetes, an open-source system that automates the deployment, scaling and management of containerized applications. Essentially, Kubernetes groups containers into logical units based on the application, so that managing these applications is easy – see the sketch below.
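
As a rough sketch of what that grouping looks like in practice – assuming the official Kubernetes Python client, a reachable cluster, and a hypothetical app=storefront label – a single label selector retrieves every pod that belongs to one logical application:

```python
# Sketch using the official Kubernetes Python client (pip install kubernetes).
# Assumes a kubeconfig pointing at a reachable cluster; the "app=storefront"
# label is a hypothetical example of one logical unit.
from kubernetes import client, config

config.load_kube_config()   # authenticate with the local kubeconfig
v1 = client.CoreV1Api()

# One label selector retrieves every pod belonging to the application,
# regardless of which node its containers were scheduled onto.
pods = v1.list_namespaced_pod(namespace="default", label_selector="app=storefront")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
```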

Heptio specializes in bringing Kubernetes and other cloud-native technologies to enterprises by creating new workflows that make adoption easy. In other words, it offers professional services for enterprises that want to bring Kubernetes into their existing systems, along with the necessary training and support.

Looked at from a broader perspective, Heptio is not just helping companies make the most of Kubernetes, but bringing them closer to the open-source community. That is probably what makes this company unique, and it’s also what’s attracting investors and customers to it.

When the company first started, a lot of things were unclear. But over the last few months the project has gained a defined direction, and the business model is sensible as well. With a defined set of goals, the company has specific plans for this round of funding: it wants to expand to Europe and Asia, and perhaps even make new partnerships and acquisitions to reach new markets.

Let’s hope companies like Heptio are successful in steering companies towards the open-source community.


Oracle joins Cloud Native Computing Foundation in further push to Kubernetes

Oracle has announced it has joined the Cloud Native Computing Foundation (CNCF) at the platinum level, boosting its push for Kubernetes with new open source product releases.

The foundation, whose role is to help sustain containers and microservices architectures, said Oracle’s ‘key role will help define the future of enterprise cloud.’

“CNCF technologies such as Kubernetes, Prometheus, gRPC and OpenTracing are critical parts of both our own and our customers’ development toolchains,” said Mark Cavage, vice president of software development at Oracle. “Together with the CNCF, Oracle is cultivating an open container ecosystem built for cloud interoperability, enterprise workloads and performance.”

Oracle becomes the third such vendor to sign up to the CNCF in a matter of weeks, after Amazon Web Services (AWS) confirmed its participation earlier this month and Microsoft did so in July. The cast list of the CNCF now reads like a who’s who of cloud computing, with Oracle the last holdout among the first- and second-tier players.

Alongside this, Oracle is releasing Kubernetes on Oracle Linux, as well as open sourcing a Kubernetes installer for its cloud infrastructure. “Developers gain unparalleled simplicity for running their cloud native workloads on Oracle,” as the company put it.

This is one of various initiatives Oracle has recently been putting into place regarding open source. The company announced in June it was making investments into Kubernetes, with a blog post from the developer team saying at the time: “Oracle is investing in Kubernetes first and foremost as a way to deploy and operate our new cloud services. We think our understanding of operating Kubernetes will translate into value for the community as we turn our real-world experience into action.” In the same month, Oracle also announced three new open source container utilities.

The company’s most recent financial results, in June, saw total cloud revenues hit $1.36 billion (£1.06bn), or 13% of overall revenue, with Larry Ellison predicting its platform as a service (PaaS) and infrastructure as a service (IaaS) businesses will outperform the software arm in due course.

With Q118 earnings set to be announced later today, Wallace Witkowski, writing for MarketWatch, said the company is “expected to mark a major milestone in its transition from traditional software sales to the cloud.”

Editor’s note: This story will be updated later with the announcement of Oracle’s financial results.

Are Your Business Apps Cloud-Ready?

The “Digital Era” is forcing us to engage with new methods to build, operate and maintain applications. This transformation also implies an evolution towards ever more intelligent applications that engage better with customers while creating significant market differentiators.

In both cases, the cloud has become a key enabler of this digital revolution. So, moving to the cloud is no longer the question; the new questions are HOW and WHEN. To make the equation even more complex, most of the time we are dealing with complex portfolios, many including hundreds of legacy applications.


Two in three DevOps engineers in the US make $100k, argues new Puppet survey

If you want to get ahead – and get better paid – in the cloud game, then chuck in the sysadmin role and become a DevOps engineer instead.

That’s the primary finding of Puppet’s 2017 DevOps Salary Report, which finds that 66% of DevOps engineers and 69% of software engineers in the US take home pay packets of more than $100,000 per year – up 2% and 3% respectively on the year before – while only 31% of sysadmins earn six figures.

Naturally, this disparity was reflected in the job titles reported by the 3,200 technology professionals polled. DevOps engineer was the most popular title overall, with software engineer in second place; the roles were reversed for the US. System administrator was only the fifth most cited occupation, behind systems developer or engineer and architect.

Not surprisingly, more experienced respondents were more likely to be earning the bigger bucks: more than half (56%) of those in the industry for between 15 and 20 years make more than $100k, a figure that rises to 63% for those with 20 years’ experience or more.

When it came to the number of servers employees were responsible for, there was a general trend of bigger is better: only 27% of those managing fewer than 100 servers earned six-figure salaries, compared with 52% of those managing 100,000 or more. Only 6% of respondents identified themselves as female, with a ‘small number’ identifying as non-binary – a rise, albeit small, on the previous year’s survey.

“As more enterprises fundamentally change the way they deliver IT services and software to users around the globe in support of digital transformation efforts, they are also challenged with finding the right talent to help increase deployment speed and innovation,” said Alanna Brown, Puppet director of product marketing. “To address these issues, they are adopting new processes, technologies and cultural norms to keep pace with the rapid rate of change.

“This year’s salary report reveals that organisations are investing more heavily in talent and positions that better support DevOps as they rush to transform their businesses and remain competitive,” added Brown.

The full report is available from Puppet (registration required).

VMworld 2017: NSX Cloud, AppDefense + VMware’s New Direction

Enterprise consultant Chris Williams recently returned from VMworld 2017 and gives his take on a few of the exciting announcements made at the event. AppDefense, VMware’s newest security solution, monitors the steady state of servers and stops infiltration at the application layer; it is a cloud offering rather than an on-premises solution. VMware also announced NSX Cloud, which allows you to define a security policy once and deploy it everywhere, giving companies a common networking and security model across clouds. To learn more about the key news from VMworld and hear from an experienced technologist, check out the video.

By Jake Cryan, Digital Marketing Specialist

Michigan School District moves to the cloud

It’s not just businesses that are moving to the cloud: almost every organization, across all spheres of work, is looking to make the most of what cloud computing offers. In fact, school districts are increasingly moving to the cloud, as it helps them make the best use of their resources, not to mention the improved connectivity and better reach that come with it. The latest to make this transition is a Michigan school district.

More than a dozen schools in southwestern Michigan are undergoing a transition to the cloud, so the district could save thousands of dollars in a single year. At a time when budget crunches are impacting the way education is imparted to children, this move could potentially improve facilities and maybe even bring in more qualified teachers to give the children in these districts a great learning experience.

The best part about this transition is that most teachers and students don’t even know that the underlying infrastructure is being upgraded – that’s really how smooth it is.

Much of this easy transition can be attributed to the fact that the district is moving only one application at a time. For example, Moodle, the learning system used by the schools in the Kalamazoo Regional Educational Service Agency (KRESA), was one of the first applications to be moved to the cloud.

With the successful transition of this application, the others are likely to follow soon. The entire move is handled by Southwest MiTech, an IT consortium that handles all tech-related work in schools in the Kalamazoo area, from purchasing computers to deciding on infrastructure changes. Currently, it supports 12 schools and four charter schools in the area.

For this transition, the district has decided to go with AWS. Though the infrastructure manager of Southwest MiTech is a fan of Microsoft’s products, he chose AWS over Azure because of its combination of advanced infrastructure, features and availability.

So far, about 15 percent of the transition is complete, but the consortium and the district expect the rest to be smooth as well. In an interview, the consortium opined that it is the initial start that’s tough, because of the potential hiccups that can arise; that is why they wanted to start small and take cautious steps. Now that the initial transition is done, the rest can be expected to speed up.

Overall, this is a sensible move, and we hope more school districts take a proactive approach to moving their applications to the cloud, so that it can benefit everyone, especially the young children.


AWS, VMware and enterprise cloud adoption maturity: VMware Cloud on AWS

According to IDC, only 25% of organizations have repeatable strategies for cloud adoption, and 32% have no cloud strategy at all – findings that underline the need for a repeatable, best-practice-based framework for planning cloud adoption that drives business success.

This Forbes posting from Joe McKendrick also references this research, noting that “only about one in seven organizations with multiple cloud workloads (14%) actually have managed, or optimized cloud strategies. The largest segment, 47%, say their cloud strategies tend to be on the fly — opportunistic, or ad hoc”, and that a further 11% were at the next-best level, “managed”, in which their enterprises are “implementing a consistent, enterprisewide best-practices approach to cloud” and “orchestrating service delivery across an integrated set of resources”.

Vendors like AWS and VMware offer ready-to-use best practices that can help plug this gap.

AWS: Enterprise cloud adoption maturity

These challenges correlate with a simple adoption planning model offered by Stephen Orban, head of enterprise strategy at AWS and previously CIO of Dow Jones.

In his experience, enterprise organisations progress through four main stages of enterprise cloud adoption maturity, consistent with the IDC research.

Organising for the cloud: Building a cloud centre of excellence

In VMware’s whitepaper ‘Organizing for the Cloud’ (a 30-page PDF), they say the key to this transformation of IT is the implementation of a ‘Cloud Operating Model’. Central to this blueprint is that the IT team should become a cloud service broker – an incremental step up in a maturity model that they describe as a cloud capability model.

They also describe the creation of a ‘Cloud Centre of Excellence’ as the best way to achieve the required changes to the IT organisation itself. This COE should create an online knowledge base of best practices, and define job roles and responsibilities such as cloud leader, architect, analyst, administrator, developer and service catalog manager, among others.

Having implemented this matrix of new capabilities, the IT team can then seek to identify and achieve the organisational improvements that will be of value to the business, such as:

  • Faster response to business needs
  • Faster incident resolution
  • Improved infrastructure deployment coordination
  • Improved ability to meet SLAs

Fundamentally, what VMware recommend – and the headline message of Enterprise Cloud – is that it will achieve an increased focus on higher-value initiatives.

IT value transformation

The headline resource from VMware to answer this question is a study commissioned from the IT Process Institute, the white paper ‘IT Value Transformation Roadmap’ (a 24-page PDF).

In this document they offer a blueprint for a Cloud Maturity Model, a ladder of maturing capability that you can compare your organisation to, and use as a framework to plan your own business transformations, where:

“This cloud computing strategy brief presents a virtualisation- and private-cloud-centric model for IT value transformation. It combines key findings from several primary research studies into a three-stage transformation road map.”

In short, this is an ideal strategy blueprint for any existing VMware customer. It proposes a three-step maturity model that begins with virtualisation and grows into full utilisation of cloud computing across three stages:

  • IT production – Focus on delivering the basics and proving value for money.
  • Business production – Utilise technology to better optimise business processes.
  • ITaaS – Fully embrace utility IT as a Service, and leverage technology for enabling new service innovation.

This corresponds with an increasing maturity in the use of virtualisation, SaaS and other cloud architecture principles and external services, beginning from where most customers are now – mostly halfway through phase one.

Becoming a transformational leader: Start your journey

It also corresponds with a journey for the CIO: from operational manager of a cost centre with poor value-for-money perceptions, through to a boardroom-level change agent who is directly driving new profit-making initiatives.

Specifically the paper makes the point that this evolution results in the CIO being recognised for delivering strategic IT value:

What is strategic IT value? Strategic IT value is demonstrated when IT plays a key role in a company’s achievement of overall business strategy. In other words, when IT is keenly focused on business outcomes and plays a significant role in optimising and improving core value chain processes. Or, when the IT organisation drives innovation that enables new technology-enabled product and service revenue streams. When IT is effective, results can be measured by improved customer satisfaction and market share gains.

In contrast, many CIOs can find themselves in something of an operational corner – responsible for keeping the lights on, but perceived as a poor-value-for-money cost base for doing so. The IT Process Institute describe how CIOs can break this constraint cycle and shift from a cost focus to delivering strategic value for the business through this three-step progression.

VMware Cloud on AWS

In ‘Taming the Digital Dragon’, McKinsey describe the hybrid cloud model as the blueprint for digital transformation, and AWS and VMware have released a major innovation to accelerate its adoption.

Announced on 28 August 2017, Amazon has launched VMware Cloud on AWS. With this update, VMware’s Software-Defined Data Center (SDDC) can now be used on Amazon’s AWS infrastructure, enabling users to run VMware applications across consistent public, private or hybrid vSphere-based cloud environments, while also having optimized access to AWS services. The service was designed to support popular use cases including data centre extension, as well as application development, testing and migration.


The evolution of phishing: Reeling them in from the cloud

Awareness of phishing has grown significantly in recent years, and users are more suspicious than ever of emails that land in their inbox from unknown or questionable senders. In response, cybercriminals have had to become savvier with their phishing tactics, turning to new methods that are harder for users to spot. The latest of these tactics uses spoofed cloud applications – a new trend that businesses need to watch out for.

Early phishing

Phishing was once all about simplistic deception. A cybercriminal would pose as, for example, a government official or customer service representative and contact an unknowing victim. The victim, wanting to comply with the law or prevent their account being shut down, would happily and unwittingly hand over their personal details to the cybercriminal.

However, this form of scam has started to decline in success. As phishing became more and more popular within the threat landscape, user awareness and understanding of it increased. Users are now less likely to openly share personal information or open suspicious attachments. They also know to look for poor spelling, grammar or strange email addresses when looking through their inbox. Technology, too, caught up with traditional phishing methods: major email providers now tend to alert users to a questionable email or source domain. Similarly, spam filters block large numbers of phishing emails before they ever reach their recipients.

Most businesses are now well equipped to defend themselves from traditional phishing attacks, so phishers have had to think of more innovative ways to trick the average person; phishing has had to become more sophisticated. The motivation of phishing attacks is now also shifting: rather than tricking employees into disclosing financial or personal information, hackers are now far more interested in collecting valid business credentials.

Phishing today

Phishing in the cloud is the newest method used by phishers today. Take this year’s Gmail phishing scam, which impacted an estimated one million accounts. The widespread attack replicated through people’s Gmail contacts when they clicked on a bogus Google Doc that appeared to have been shared by a known contact. Part of what was so startling about the scam was how believable it was; hackers used a deceptively named web app – working from within Google’s system for developers. By calling a malicious third-party app “Google Docs,” the attackers were able to trick people into thinking they were being asked to click on a legitimate document, when in fact they were granting account access to hackers. Hackers could then use this permission to see victims’ contacts, read their emails, track locations, and see files created in G Suite.

This attack underscores the security risks of OAuth, which Google uses to streamline third-party access to user accounts. Through OAuth, users don’t have to hand over any password information; they instead grant permission so that a third-party app can connect to their internet accounts for, say, Google, Facebook or Twitter.

In the Google attacks, hackers exploited this capability, aware that users could grant them access to personal information without ever needing to re-enter their login details. As the phishing scam shows, such protocols make it easier for users to allow access to third-party applications – but, in turn, they make it easier for hackers to gain access without needing the credentials themselves.
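
To make the mechanics concrete, here is an illustrative sketch – not the attackers’ actual code – of the kind of OAuth authorization request at play. The client ID and redirect URI are placeholders; the point is that the consent screen shows whatever display name the app was registered under, while the request itself is perfectly legitimate:

```python
# Illustrative sketch of an OAuth authorization request. The consent screen
# displays the app's registered name - the attackers registered theirs as
# "Google Docs" - while these parameters look entirely legitimate.
from urllib.parse import urlencode

params = {
    "client_id": "0000000000-example.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "https://attacker.example/callback",            # hypothetical
    "response_type": "code",
    # Scopes of the kind the 2017 campaign obtained: mail plus contacts.
    "scope": "https://mail.google.com/ https://www.googleapis.com/auth/contacts.readonly",
}
consent_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(consent_url)

# Note what is absent: a password prompt. If the victim clicks "Allow",
# Google itself issues the app tokens for these scopes.
```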

The Google phishing scam’s success relied on psychological manipulation. By impersonating Google Docs, hackers automatically gained the trust of a number of users – just a small change in how the application domain was disguised successfully convinced users that the application was trustworthy.

Next-gen phishing

Whilst traditional phishing scams now fail to reel in most of us – with their suspect spelling and senders – the Google Docs phishing attack demonstrated how a new breed of cloud phishing can trick even some of the most tech-savvy users. Next-generation phishing will see hackers manipulate user trust further by creating malicious applications disguised as legitimate applications, which users download and use. The widespread adoption of SaaS applications has made this an attractive vector for threat actors, and one that has not yet been exploited to its full potential.

In response to the Gmail attacks, Google implemented a number of new security measures: machine learning, improved email filtering and malicious URL detection, all of which improve email security. Some providers now even warn users when they attempt to reply to an email address outside their corporate domain – a simple but very useful check within the workplace, as sketched below.
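
The check behind such a warning is conceptually simple; a minimal sketch (with a hypothetical corporate domain) might look like this:

```python
# Minimal sketch of an external-reply check, with a hypothetical domain.
def is_external_reply(reply_to: str, corporate_domain: str = "example.com") -> bool:
    """True if replying would send mail outside the corporate domain."""
    domain = reply_to.rsplit("@", 1)[-1].lower()
    return domain != corporate_domain.lower()

print(is_external_reply("alice@example.com"))       # False - internal
print(is_external_reply("alice@examp1e-mail.com"))  # True - show a warning
```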

Although cloud providers will do their best to prevent phishing scams and warn users about them, some individuals will still get hooked on a phisher’s line. Employee training therefore remains the first line of defence against phishing attacks. Enterprises should also consider investing in security technologies that can detect these threats as they advance.