Enterprise Architecture: Ripe for Digital Disruption | @CloudExpo #Microservices

The iteration of constraints and initial conditions that drive and influence self-organization within the enterprise is the actual role of an architect who is architecting emergent behavior – in particular, business agility. You may call such activities something else – management practice or some such – and to be sure, we must reinvent management practice along the same lines as EA. But whatever we call it, there needs to be an understanding that creating the conditions that lead to effective self-organizing teams is itself an architectural activity, an activity separate from the architectural activities such teams undertake when their goal is to implement a software system.


Digital Transformation: Leading a Learning Organization

Forward-thinking CEOs ensure that their organization has access to the latest digital business technologies. But in today’s global networked economy, that’s not enough. If you believe that digital transformation knowledge is power, then fully mastering the ability to turn your IT capabilities into actionable wisdom is infinitely more potent.

The Harvard Business Review (HBR) market research team recently completed a global study of the path that several organizations have taken to develop and deliver the digital learning support resources that their key internal stakeholders both need and want.


The Internet Part of ‘Internet of Things’ | @ThingsExpo @Dyn #Microservices

Internet-ready devices are becoming increasingly common, yet architecturally they often still act like point-to-point networks. Meanwhile, containers are making rapid deployments across networks painless, but knowing where and when to deploy them to meet market demand is just as critical.
In his session at 17th Cloud Expo, Matt Torrisi, Customer Success Operations at Dyn, will discuss what the public internet in the middle really looks like, and how operators can build their networks for better performance and security.


The verdict on city council disaster recovery plans: Good, but could do better


The good news: all of the UK’s major city councils have disaster recovery (DR) plans in place. The bad news: two in five will not have tested them within the past 12 months.

The figures come from a new Freedom of Information (FoI) request by disaster recovery specialists Databarracks, which found that 38% of the UK’s major city councils are not regularly testing their DR systems. Databarracks argues this is more of an issue with public sector organisations; according to the firm’s 2015 Data Health Check, only 21% of large private companies had failed to test their disaster recovery over the past year.

Similarly, there was little consistency among city councils on recovery time objectives (RTOs); while some bodies could recover their systems within a few hours, others took up to four days.

This echoes previous research conducted by Databarracks, which came to a similar conclusion. In May, with the General Election around the corner, the disaster recovery provider argued that many councils in London hadn’t tested their systems in at least a year, and again revealed wildly varying RTOs for electoral data, ranging from 24 hours to two weeks.

Peter Groucutt, Databarracks managing director, argued the majority of councils were prioritising council tax alongside other business critical functions, which is a good sign – yet it wasn’t that way across the board, with one council’s prioritisation of ‘car parking’ being described as “questionable” by Groucutt.

Groucutt said: “The results of our FoI request exposed that a significant proportion of city councils had not tested plans for over a year, meaning that they cannot be confident in their effectiveness in the event of a genuine crisis.” He added: “It is encouraging to see that all city councils have thorough DR plans in place, but that’s only half the job.

“To guarantee effectiveness, regular DR testing must be performed and plans must be constantly updated.”

Accelerating cloud storage and reducing the effects of latency


Over a number of years there has been a long, hard-fought battle to secure the ability to ‘accelerate anywhere’ any data type to, from and across a cloud area network (ClAN), whether to allow fast access to applications or to secure data as part of a back-up and archiving strategy. According to Claire Buchanan, chief commercial officer at self-configuring infrastructure optimisation networks (SCION) vendor Bridgeworks, this battle is still ongoing. With traditional WAN optimisation techniques, the long-drawn-out battle has still to be won.

“It may not be the case for long, with the advent of machine intelligence and technologies such as SCION. The problem has been that of small pipes and the inability to accelerate data. Therefore, the use of deduping and compression tools has been the only way to gain a perceived performance improvement,” she explains.

With this in mind Tony Lock, programme director at analyst firm Freeform Dynamics, advises people to closely scrutinise the available WAN acceleration solutions against their business-level requirements for WAN performance. “They need to match them to what is currently being delivered, combined with assessing what improvements can be achieved,” he adds.

Yet larger network connections, or ‘pipes’, of 100Mb/s, 1Gb/s and greater are becoming the norm. Buchanan therefore thinks the main challenge has changed: it is now about how to fill the pipes to maximise utilisation of the network, rather than how to minimise the amount of data sent. “With SCION this can be achieved and the network performance battle can be won,” she claims. With SCION, she argues, the traditional problems relating to WANs are flipped on their head, because the technology works “sympathetically with TCP/IP in tandem with its strengths whilst overcoming its greatest inhibitor – latency.”

Mitigating latency

Mitigating latency is a crucial challenge because latency can slow down the transmission of data to and from public, private and hybrid clouds, and it can make back-up and disaster recovery more challenging than they need be. Buchanan argues this can be resolved by radically reducing the effects of latency: performance and utilisation can climb beyond 90%, allowing data to move close to the maximum capability of an organisation’s bandwidth. This in turn will make it easier for customers to move data to the cloud, and they will gain the ability to spin their servers up and down at will, wherever it is deemed appropriate.
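To see why latency rather than raw bandwidth is often the bottleneck, note that a single TCP stream can move at most one window of data per round trip. The sketch below is a minimal illustration of that arithmetic only, not Bridgeworks’ implementation; the window size and round-trip times are assumed figures.

```python
# Illustrative sketch: why latency, not bandwidth, often caps single-stream
# TCP throughput on long-haul links. Assumed figures, not any vendor's numbers.

def max_tcp_throughput_gbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound for one TCP stream: throughput <= window size / round-trip time."""
    rtt_s = rtt_ms / 1000.0
    return (window_bytes * 8) / rtt_s / 1e9  # convert bits per second to Gb/s

link_gbps = 10.0          # a 10Gb/s 'pipe'
window = 64 * 1024        # classic 64KB TCP window, no window scaling

for rtt in (1, 10, 50, 100):  # round-trip times in milliseconds
    cap = max_tcp_throughput_gbps(window, rtt)
    util = min(cap / link_gbps, 1.0) * 100
    print(f"RTT {rtt:>3} ms: single-stream cap {cap:6.3f} Gb/s "
          f"({util:5.1f}% of a {link_gbps:.0f}Gb/s link)")
```

With a 64KB window and a 50ms round trip, one stream tops out at roughly 0.01Gb/s, a tiny fraction of a 10Gb/s link, regardless of how much bandwidth is available.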

Lock adds: “Everything depends on the nature of the applications and business services being run over the communications links, and so the tip is to ensure that you really understand what is needed to warrant that the business gets the service quality it needs from the use of the application.” He says it is also essential to make sure that IT understands how to manage and administer it over its working life, which could be over the course of many years. “The key is to put in place good management tools and processes – especially if these are new operations to IT”, he suggests.

Data deduplication

In many cases machine learning technologies such as SCIONs will limit the need for human intervention and enable more efficient management of network performance. Yet Buchanan says deduplication has traditionally been an “acceptable way to move data where pipe size is a limitation and chatty protocols are the order of the day, but it is heavy on computational power, memory and storage.” She therefore advises organisations to ask the following questions:

  • What is the hidden cost of WAN optimisation? What is the cost of the kit to support it? As one technology starts to peak at, for example, 1Gb/s, you have to look at the return on investment. With deduplication you have to look at the point where the technology tops out: performance flattens off and the cost-benefit ratio weakens. Sometimes it’s better to take a larger pipe with different technology to get better performance and ROI.
  • Are the traditional WAN optimisation vendors really offering your organisation what it needs? Vendors other than WAN optimisation vendors are increasingly using deduplication and compression as part of their offerings, and it is not possible to “dedupe data already deduped”. This means traditional WAN optimisation tools simply pass such data through untouched, delivering no performance improvement (see the sketch after this list).
  • What will traditional WAN optimisation tools become in the new world of larger pipes?  Lock adds that “data deduplication is now reasonably mature, but IT has to be comfortable that it trusts the technology and that the organisation is comfortable with the data sets on which it is to be applied.” He also says that there are some industries that may require a sign-off by auditors and regulators on the use of deduplication on certain data sets.
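
The “can’t dedupe data already deduped” point can be demonstrated with a tiny chunk-hashing experiment. The sketch below is illustrative only, not any vendor’s actual algorithm; the chunk size and sample data are assumptions. Repetitive data dedupes well, while random-looking data, which is what encrypted, compressed or already-deduped traffic resembles, yields essentially no savings.

```python
# Illustrative sketch: deduplication relies on repeated chunks, which encrypted
# or already-deduped data no longer contains. Chunk size and sample data assumed.
import hashlib
import os

def dedup_ratio(data: bytes, chunk_size: int = 4096) -> float:
    """Total chunks divided by unique chunks; 1.0 means deduplication saves nothing."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return len(chunks) / len(unique)

repetitive = b"the same database page over and over " * 50_000   # duplicate-heavy payload
random_like = os.urandom(len(repetitive))   # stands in for encrypted / pre-deduped data

print(f"repetitive data  dedup ratio: {dedup_ratio(repetitive):5.1f}x")
print(f"random-like data dedup ratio: {dedup_ratio(random_like):5.1f}x")
```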

Fast restoring

Organisations that want to restore encrypted data quickly from off-site facilities need to consider the network delays caused by latency. “This has coloured IT executives’ thinking with regard to the location of their secondary and tertiary datacentres, and so they have sought to minimise time and perceived risk by locating their datacentres within the circle of disruption,” says Buchanan.

She adds that distance is normally a reflection of latency as measured in milliseconds, but this isn’t always the case, depending on the network. The laws of physics don’t allow latency to be eliminated, but its effects can be mitigated with SCION technologies. She argues that SCIONs can enable organisations to move encrypted data just as fast as anything else, because the technology doesn’t touch the data and is therefore data agnostic.

Lock advises that there are many factors to consider, such as the location of the back-up data and the resources available (network, processors, storage platforms and so on) to perform the restoration of the encrypted data. “However, the long-term management of the encryption keys will certainly be the most important factor, and it’s one that can’t be overlooked if the organisation needs large-scale data encryption,” he explains.

With regards to SCION, he says that traditional WAN networks have been static: “They were put in place to deliver certain capacities with latency, but all resilience and performance capabilities were designed up-front. So the ideas behind SCION, looking at making networks more flexible and capable of resolving performance issues automatically by using whatever resources are available to the system, not just those furnished at the outset, is an interesting divergence.”

Differing approaches

According to Buchanan the traditional premise has been to reduce the amount of data to send. “In contrast SCION comes from the premise of acceleration, maximising the efficiency of the bandwidth to achieve its ultimate speed”, she explains.

In her opinion the idea is that parallelising data across virtual connections, filling the pipes, and then using machine intelligence to self-configure, self-monitor and self-manage the data from ingress to egress ensures optimal performance, optimal utilisation and the fastest throughput possible.
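One common way to fill a large pipe despite the per-stream latency cap is to run many connections in parallel, which is broadly what “parallelising data across virtual connections” describes. The sketch below uses assumed figures and is not Bridgeworks’ algorithm; it simply shows aggregate throughput growing with the number of latency-capped streams until the link itself saturates.

```python
# Illustrative sketch: aggregate throughput of N parallel, latency-capped streams.
# Window size, RTT and link speed are assumed figures, not vendor data.

def single_stream_gbps(window_bytes: int, rtt_ms: float) -> float:
    """One TCP stream can move at most one window of data per round trip."""
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1e9

link_gbps = 10.0          # size of the 'pipe'
window = 256 * 1024       # per-stream window, hypothetical
rtt_ms = 50.0             # round-trip time, hypothetical

per_stream = single_stream_gbps(window, rtt_ms)
for streams in (1, 8, 64, 256):
    aggregate = min(streams * per_stream, link_gbps)   # can't exceed the link itself
    print(f"{streams:>3} parallel streams: ~{aggregate:5.2f} Gb/s "
          f"({aggregate / link_gbps:5.1%} of the link)")
```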

Cloud: Nothing special

Both Lock and Buchanan agree that there is nothing special about the cloud. In Buchanan’s view it’s just one more choice that’s available to CIOs within their global strategy. “From a data movement perspective the fact remains that whatever strategy is chosen with regards to public, private or hybrid cloud, the underlying and fundamental problem remains – that being how to get your data to and from whichever location you have chosen without impediment”, she explains.

She adds that IT is under pressure to deliver a myriad of initiatives, whether that is cloud or big data, IoT or digital transformation: “Couple that with the data deluge that we are experiencing as shown by IDC’s prediction that there will be 40 ZB of data by 2020, and so there is a high mountain to climb.” For this reason she argues that organisations need to find smart ways to do things. This is crucial if organisations are going to be able to deliver better and more efficient services over the years to come. It’s time for new approaches to old problems.

Become smarter

Most of the innovation is coming from SMEs and not large corporate enterprises.  “Small companies are doing really clever things that flip old and established problems on their heads, and this kind of innovation only really comes from SMEs that are focused on specific issues – and as we all saw in 2008 with Lehman Brothers long gone are the days when being big meant you were safe”, she argues.

She therefore concludes that CFOs and CIOs should look at SCION solutions such as WANrockIT from several angles, such as cost optimisation by doing more with their existing pipes; connectivity expansion should only occur if it is absolutely necessary. With machine intelligence it’s possible to reduce staffing costs too, because SCIONs require no manual intervention. SCION technology can also enable organisations to locate their datacentres, whether for co-location or cloud, anywhere without being hampered by the negative effects of network latency.

In fact a recent test by Bridgeworks involving 4 x 10Gb connections showed data moving at 4.4GB per second, equating to 264GB per minute or 15,840GB per hour. SCIONs therefore open up a number of opportunities for CFOs and CIOs; in essence, they will gain a better service at a lower cost. However, Lock concludes that CFOs should not investigate this kind of proposition alone: the involvement of IT is essential to ensure that business and service expectations are met from day one of implementation. By working together, CFOs and CIOs will be able to accelerate cloud storage by mitigating latency.
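Those figures are internally consistent, as the quick check below shows. The comparison against raw capacity is an added calculation, assuming the reported 4.4GB refers to gigabytes per second and the four connections to 10 gigabits per second each, which is the usual convention.

```python
# Quick check of the quoted test figures; the utilisation comparison is an
# added calculation, not a number reported in the article.
gb_per_second = 4.4                       # gigabytes per second, as reported
print(gb_per_second * 60)                 # -> 264.0 GB per minute
print(gb_per_second * 3600)               # -> 15840.0 GB per hour

link_capacity_gbps = 4 * 10               # four 10Gb/s connections, in gigabits/s
achieved_gbps = gb_per_second * 8         # 4.4 GB/s -> 35.2 Gb/s
print(f"~{achieved_gbps / link_capacity_gbps:.0%} of raw link capacity")   # ~88%
```

That works out at roughly 88% of the raw capacity of the four links, in line with the 90%+ utilisation claimed earlier.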

OpenStack claims Project Navigator will lead adopters through development hell

The OpenStack Foundation’s chief operating officer Mark Collier has lifted the lid on Project Navigator, a scheme to help users see their way through the myriad component projects, differing levels of software maturity and documentation involved in taking part in the project.

The involvement of 200 vendors in the open source project is both a strength and a weakness in such a large complex community with diverse projects, according to Collier, speaking at the OpenStack developer conference. Project Navigator aims to help companies chart a course more easily, he said.

“It’s good to have options but it can be overwhelming,” said Collier. “We have over two dozen different services now that you can put into production. There’s a small number of projects that every cloud uses, but there are quite a few projects that give you optional services.”

These days users need help making sense of the various projects before they can progress, said Collier. Project Navigator aims to offer intelligence, drawn from a number of sources, to help them make quicker decisions.

Users know a little about a lot of projects but rarely have complete information, Collier said. In response, the foundation has gathered metadata about the various projects, on everything from breadth of adoption to documentation and age, and will publish this on its website.

Collier admitted this would be a best effort. “We can’t fly everywhere and talk to everyone. It makes more sense to distil it down and make it digestible online,” said Collier.

The objective of the Navigator tool is to educate users about the core set of services that they’ll need in any cloud and provide a clear delineation between those and the services that are optional.

In a related support development, the OpenStack Foundation announced the launch of a certification program for OpenStack cloud admins. Like Project Navigator, the scheme is a formal recognition of the growing complexity of OpenStack. The large number of sub-projects makes it hard for businesses to find qualified administrators before they can adopt the technology.

OpenStack COO Mark Collier said similar certifications are planned for OpenStack developers and other roles in the project.

Meet the Parallels Team at Microsoft Future Decoded

The Parallels team is so excited to be a Bronze Sponsor at the Microsoft Future Decoded conference in London, UK, on November 10-11 (Booth #16)! There we’ll demo the latest versions of Parallels Remote Application Server, Parallels Desktop for Mac Business Edition and Parallels Mac Management. Register here before October 30th, 2015 for the chance to win a […]


Oracle announces new levels of cloud, mobile and IoT integration in its Cloud Platform

Oracle has announced at OpenWorld a ‘comprehensive’ suite of integration services to help clients connect their cloud, mobile and IoT systems into the Oracle Cloud Platform.

The Oracle Cloud Platform for Integration portfolio now includes Oracle’s IoT Cloud, Integration Cloud, SOA Cloud and API Manager Cloud range of services.

Oracle says its Integration Cloud is ideal for non-technical users such as citizen integrators, applications staff in IT departments and line-of-business managers who need to integrate software as a service (SaaS) applications. To this end it comes with a simple, intuitive, web-based, point-and-click user interface.

On the other end of the technical competence spectrum, Oracle’s SOA Cloud was designed for integration developers. It provides a full integration platform, including service virtualisation, process orchestration, B2B integration, managed file transfer and business activity monitoring dashboards. In accordance with the more detailed nature of the work of the typical user it has fine-grained control and the capacity to support various use cases.

Oracle’s integration cloud services are fully portable, it claims, so that users can switch their integration workloads between on-premise and the cloud, as business requirements change.

The IT architectures that organisations have relied on for decades are too rigid and inflexible for the digital age, according to Amit Zavery, Oracle’s senior VP of Cloud Platform and Integration products. “Organisations need to rethink API management and service integration across cloud, mobile and IoT initiatives,” said Zavery.

Oracle Cloud Platform’s suite of integration services will provide the flexibility to allow them to adapt, which will boost productivity, slash costs and catalyse the inventive processes, Zavery argued.

Oracle Internet of Things Cloud Service should make it easy to connect any device, whether it generates or analyses data, and to extend business processes within enterprise applications, says Oracle. This, it says, will lead to faster development of IoT applications, with preventive maintenance and asset tracking pre-integrated with other Oracle systems such as Oracle PaaS, Oracle SaaS, Oracle JD Edwards, Oracle E-Business Suite and Oracle Fusion.

Meanwhile, the Oracle API Manager Cloud Service will help developers to create and expose APIs to internal or external consumers quickly but without compromising on security, according to the vendor.

The Oracle Cloud Platform is part of the Oracle Cloud. Oracle says its Cloud offering supports 70 million users and more than 34 billion transactions each day and runs on more than 50,000 devices and more than 800 petabytes of storage in 19 data centres around the world.

Mirantis and UCloud in joint bid to tap massive potential of Chinese cloud market

Software and services vendor Mirantis and Chinese cloud operator UCloud have announced a joint venture to speed OpenStack adoption among China’s finance, telecom, state-owned enterprise and large internet businesses.

The venture, dubbed UMCloud, will be led by UCloud CEO and founder Xinhua Ji with head offices in Shanghai, China.

Investment in cloud computing infrastructure in China is estimated by consultancy Bain & Company to be growing faster than overall IT spending and projected to reach $20 billion by 2020, a compound annual growth rate of 40% to 45%. In 2014, China had more than 640 million internet users – more than the USA, India and Japan combined. In 2013, smartphone use in China exceeded 700 million units and 530 million of these Chinese smartphone users accessed the internet from their mobile device.

Cloud computing is a national strategic policy and the government included it in the nation’s 12th Five-Year Plan. Last December, the Ministry of Industry and Information Technology (MIIT) officially declared its intention to support OpenStack ecosystems and to encourage state-owned enterprises to use OpenStack-based cloud products.

California-based software and services vendor Mirantis is described as a pure-play OpenStack company, with installations at AT&T, Ericsson, Walmart and Wells Fargo. It has been funded by a quarter of a billion dollars in venture capital since 2012 and is the second-highest contributor of open source code to OpenStack. Mirantis’ Chinese clients include telco hardware makers Jiesai, Huawei and ZTE.

UCloud is China’s top independent public cloud service provider with clients using e-commerce, gaming, mobile internet and SaaS services, from data centres in China, Hong Kong and the US. The company announced a $100 million Series C financing round in April, with $160 million raised to date.

“China and the United States are two countries where cloud computing is developing the fastest,” said Alex Freedland, president and co-founder of Mirantis. “We see unlimited potential for OpenStack as a major cloud engine in China.”

Identity in Communication | @ThingsExpo #IoT #M2M #RTC #WebRTC

Who are you? How do you introduce yourself? Do you use a name, or do you greet a friend by the last four digits of his social security number? Assuming you don’t, why are we content to associate our identity with 10 random digits assigned by our phone company? Identity is an issue that affects everyone, but as individuals we don’t spend a lot of time thinking about it.
