Enterprises are fast realizing the importance of integrating SaaS/cloud applications, APIs and on-premises data and processes to unleash hidden value. This webinar explores how managers can use a microservice-centric approach to aggressively tackle the unexpected new integration challenges posed by the proliferation of cloud, mobile, social and big data projects.
Industry analyst and SOA expert Jason Bloomberg will strip away the hype from microservices, and clearly identify their advantages and disadvantages. He will then discuss the role microservices play in cloud-enabled enterprise integration. Finally, he will connect microservices to SOA and explain how microservices represent an update of the best elements of the SOA approach.
Salesforce bags $1.5bn in Q1 2016, on track for $6bn annual run rate
CRM giant Salesforce announced another record quarter this week as it took home over $1.5bn in revenue for Q1 2016, up from $1.44bn the previous quarter. The company claims it is on track to become the first pure-play enterprise cloud company to surpass the $6bn annual run rate mark.
At $1.51bn for the quarter, revenue is up 23 per cent year-on-year, and the company also reported deferred revenue of $3.06bn, up 31 per cent year-on-year.
Salesforce also raised its fiscal year 2016 revenue guidance to $6.55bn, up from $6.52bn, and said it is on track to be the first pure-play enterprise cloud company to surpass the $6bn annual run rate mark. Full fiscal year 2015 revenue was $5.37bn.
“Salesforce has surpassed the $6 billion annual revenue run rate faster than any other enterprise software company, and our current outlook puts us on track to reach a $7 billion revenue run rate later this year,” said Marc Benioff, chairman and chief executive officer, Salesforce.
“Our goal is to be the fastest to reach $10 billion in annual revenue,” Benioff said, echoing his call-to-arms from the previous two quarters.
Salesforce has recently been the subject of a series of rumours suggesting its potential acquisition by another enterprise technology firm, although the company has repeatedly declined to comment on the speculation. If the rumours are true, it’s almost certain another record fiscal quarter would send the asking price to even greater, eye-watering heights.
Leeds Building Society targets customer engagement with HP deal
Leeds Building Society is to revamp its customer engagement tools through a ten-year deal with HP Enterprise Services, which will encompass a number of independent software vendors working on different parts of the business. The agreement builds on an earlier deal between the two firms in 2013, which focused on moving the building society’s core banking platform to the cloud.
Under the 10-year agreement, HP Application Transformation Services will work with independent software vendors TIBCO, Numéro and Infor to provide Leeds Building Society with customer engagement capabilities hosted in an HP Helion managed virtual private cloud environment. This will help the society streamline its mortgage and savings processes, making it easier to grow market share and penetrate new market segments.
The deal has several parts. Omni-channel customer experience management specialist Numéro will provide contact management capability for new customer communication channels. The idea is to ensure the building society can offer support across any communications channel, without the customer having to start the process again. Infor’s multi-channel, interactive campaign management solution, Infor Epiphany, will help the building society to offer customers personalised communications, allowing the society to strengthen individual customer relationships. HP Exstream will provide customer communication (such as statements, notices and renewals) through customers’ preferred channel. TIBCO ActiveMatrix BPM software will digitise the society’s business processes, systems and applications.
“Like all financial institutions, our future is dependent upon delivering the right services for current and future customers,” said Tom Clark, chief information officer at Leeds Building Society. “ICE represents the cornerstone of our long-term strategy to deliver significant productivity and customer communication channel improvements while reducing costs and meeting regulatory requirements. HP already hosts our core application for mortgages and savings and, with a proven track record of delivering large-scale hosted services and innovative technology, can help us to achieve our business objectives.”
Leeds Building Society joined the shared services alliance founded by HP Enterprise Services and the Yorkshire Building Society in September 2013, in a deal that saw the society move its core application for mortgages and savings to the cloud. The deal also marked a growing recognition among the UK’s mid-tier institutions of the power of cloud to help them move with the times.
HP’s original deal with the Yorkshire Building Society involved shifting the building society’s core mortgage and savings application to the cloud. That in turn enabled the Yorkshire to effectively offer its automated mortgage sales, lending and savings account processing product as a white labelled solution to other financial institutions (which it had been doing for years), through HP.
The Leeds Building Society is the fifth largest of its kind in the UK, with assets of £10 billion. Founded in 1875, the society has approximately 703,000 customers and 65 branches in the UK, with 29 in Yorkshire and a branch each in Dublin and Gibraltar.
Reader Question: NSX Riding on Physical Infrastructure?
There’s been a lot of traction and interest around software defined networking lately. I posted a video blog last week comparing features and functionality of VMware NSX vs. Cisco ACI. A reader left a comment on the post with a really interesting question. Since I have heard similar questions lately, I figured it would be worth addressing in its own post.
The question was:
“Great discussion – one area that needs more exploration is when NSX is riding on top of any physical infrastructure – how is the utilization and capacity of the physical network made known to NSX so that it can make intelligent decisions about routing to avoid congestion?”
Here was my response:
“You bring up an interesting point that I hear come up quite a bit lately. I say interesting because it seems like everyone has a different answer to this challenge and a couple of the major players in this space seem to think they have the only RIGHT answer.
If you talk to the NSX team at VMware, they would argue that, since the hypervisor is the closest thing to your applications, you’d be better off determining network flow requirements there and dictating the behavior of that traffic over the network, rather than making reactive adjustments for what could be micro-burst traffic, where a lot of reaction yields little impact.
If you were to pose the same challenge to the ACI team at Cisco, they would argue that without intimate visibility, control and automated provisioning of active network traffic AND resources, you can’t make intelligent decisions about behavior of application flows, regardless of how close you are to the applications themselves.
I think the short answer, in my mind anyway, to the challenge you outline lies within the SDN/API integration side of the NSX controller. I always need to remind myself that NSX is a mix of SDN and SDN-driven Network Virtualization (NV) and Network Function Virtualization (NFV). That being the case, the behavior of the NSX NV components can be influenced by more than just the NSX controller. Through mechanisms native to the network, such as NetFlow, NBAR2 and IPFIX, we can get extremely granular application visibility and control throughout the network itself and, by combining that with NSX API integration, we can evolve the NSX solution to include intelligence from the physical network, thereby enabling it to make decisions based on that information.”
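To make that last point a little more concrete, here is a rough sketch of the pattern being described: pull utilisation figures the physical network already exports via NetFlow/IPFIX-style telemetry, then feed the congested links towards the NSX side through an integration layer. Everything in it is illustrative; the collector and NSX endpoints are hypothetical placeholders, not documented VMware or Cisco APIs.

```python
# Illustrative sketch only: poll a (hypothetical) flow-collector REST endpoint
# for link utilisation gathered via NetFlow/IPFIX, then hand the congested
# links to a (hypothetical) NSX integration endpoint so the controller can
# factor physical-network state into its decisions.
import requests

COLLECTOR_URL = "https://flow-collector.example.local/api/link-utilisation"   # placeholder
NSX_HINT_URL = "https://nsx-manager.example.local/api/custom/congestion-hints"  # placeholder
CONGESTION_THRESHOLD = 0.80  # flag links running above 80% utilisation


def fetch_link_utilisation():
    """Pull per-link utilisation figures exported by the flow collector."""
    response = requests.get(COLLECTOR_URL, timeout=10)
    response.raise_for_status()
    # Expected shape (example): [{"link": "leaf1-spine2", "utilisation": 0.91}, ...]
    return response.json()


def push_congestion_hints(congested_links):
    """Send the list of congested links to the NSX side via a custom integration endpoint."""
    response = requests.post(NSX_HINT_URL, json={"congested_links": congested_links}, timeout=10)
    response.raise_for_status()


if __name__ == "__main__":
    links = fetch_link_utilisation()
    congested = [l["link"] for l in links if l["utilisation"] > CONGESTION_THRESHOLD]
    if congested:
        push_congestion_hints(congested)
```

In practice a loop like this would run periodically (or subscribe to streaming telemetry), but the shape of the idea is the same: the physical network supplies the data, and the integration layer translates it into something the overlay can act on.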
Like I said, an interesting question. There’s a lot to talk about here and everyone (myself included) has a lot to learn. If you have any more questions around software defined networking, leave a comment or reach out to us at socialmedia@greenpages.com and I’ll get back to you.
By Nick Phelps, Principal Architect
Cambridgeshire ICT Service Chooses Parallels 2X RAS to Ensure Network Availability
“The single most important benefit of using the software has been that it has fitted in seamlessly with our existing infrastructure without the need to retrain staff or change systems management software.” – Leonard Veenendaal, Technical Services Manager, Cambridgeshire County Council. The Challenge: Cambridgeshire ICT Service makes […]
Potential of the ‘Internet of Things’ By @GreenwaveSys | @ThingsExpo [#IoT]
We’re no longer looking to the future for the IoT wave. It’s no longer a distant dream but a reality that has arrived. It’s now time to make sure the industry is aligned to handle the IoT’s growing pains: to cooperate and collaborate as well as innovate.
In his session at @ThingsExpo, Jim Hunter, Chief Scientist & Technology Evangelist at Greenwave Systems, will examine the key ingredients to IoT success and identify solutions to challenges the industry is facing. The deep industry expertise behind this presentation will provide attendees with a leading edge view of rapidly emerging IoT opportunities and challenges, and will ensure a lively Q&A conclusion.
CA To Present At @DevOpsSummit | @CAinc [#Agile #DevOps #APM #API]
Speed. Quality. Innovation. That is what you are tasked with day in and day out… delivering superior user experiences that excite and engage your customers. But every benefit of beating your competition to market disappears if the app fails to perform in production. With development now happening in sprints, testing can no longer be sequential. Quality must be a focal point at every stage in the SDLC.
In his session at DevOps Summit, Scott Edwards, Director of Product Marketing at CA Technologies, will discuss how to “shift quality left” to make QA more than just a single step in the SDLC. By automating the creation of test and virtual services, and enabling access to realistic test data and scenarios early in the SDLC, you can drive a pervasive, continuous quality process integrated earlier throughout the development process.
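As a purely illustrative sketch of one piece of that idea, the snippet below stands up a tiny “virtual service” that imitates a downstream payment API so integration tests can run early in the SDLC, before the real dependency exists. It is not CA’s Service Virtualization product, just Python’s standard library, and the endpoint and response fields are made up for the example.

```python
# Minimal "virtual service": a stub that imitates a downstream payment API so
# tests can run before the real service is available. Purely illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class FakePaymentService(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        request = json.loads(self.rfile.read(length) or "{}")
        # Return a canned, realistic-looking response for any charge request.
        body = json.dumps({
            "transaction_id": "test-0001",
            "amount": request.get("amount", 0),
            "status": "approved",
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead of the
    # real payment provider while the sprint's tests run.
    HTTPServer(("localhost", 8080), FakePaymentService).serve_forever()
```

Because the stub returns predictable data, tests for the calling application can be written and automated in the same sprint as the feature itself, which is the essence of shifting quality left.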
Suffering a cloud outage? Look closer to home for the potential cause
If your systems are down, and you’re just about to get on the phone to tear a strip off your cloud vendor, remember this: technical errors in failed cloud implementations are more likely to come from the user organisation itself than from the supplier.
That was the surprising finding of a report recently released by The Economist Intelligence Unit, in which 36% of respondents said errors were more likely to come from within, compared with 29% who pointed outside. The report noted commercial errors were the most common type of supplier failure, while a third of respondents said they were unaware of any failures in the cloud infrastructure they used.
Public cloud services were more likely to throw up technical failures than private cloud services, according to the report. Downtime is most likely to come from what the report called ‘significant outages’ (23%), as well as failure to integrate with existing systems (20%) and data breaches (17%). Inevitably, a skills shortage, along with a lack of business continuity and disaster recovery planning, was also blamed for exacerbating, if not directly causing, technical issues.
The report notes: “It would be misleading to state that public cloud is always riskier. In the early days of the cloud, users may have experienced greater security issues since the technology was not yet mature and because of their own inexperience. Conversely, the bespoke nature of private clouds allows for a greater level of security, though with possibly higher initial costs.”
In a more positive vein, the survey showed that when cloud failures did occur, they were rarely catastrophic, with only 9% of those polled saying their incidents were “high” risk overall. 34% opted for “medium”, while 55% said damage was “limited.” Significantly, the loss of customer data was the biggest fear executives had over a failed cloud implementation.
The report concludes by assessing the maturation of cloud computing, calling it a “core component of the IT landscape”, yet arguing there is still more work that needs to be done, in terms of skills implementation and improved disaster recovery strategies. Businesses, wherever possible, need to be proactive, not reactive – and given Databarracks recently argued disaster recovery as a service (DRaaS) was going to be the most important cloud service in 2015, many seem to be on their way.
CERN and Rackspace Team Up Again
The European Organization for Nuclear Research (CERN) and managed cloud provider Rackspace have been working together since 2013 when Rackspace created an OpenStack-based hybrid cloud set-up for CERN. Now, the two organizations are working together again to create a multi-cloud, collaborative work environment for CERN’s global research teams.
So far, the reference architecture and operation models have been created in order to better manage the cloud environments. Identity and authentication tooling has also been built to span multiple OpenStack clouds. This model allows the data obtained at CERN to be shared with all of its overseas research teams.
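The post does not spell out how that multi-cloud access is wired up, but the general pattern can be sketched with the openstacksdk Python library: each cloud is defined once in a local clouds.yaml and then opened by name. The cloud names below are hypothetical placeholders, not CERN’s actual configuration.

```python
# Rough sketch of working against several OpenStack clouds from one place.
# Credentials and endpoints for each named cloud live in a local clouds.yaml;
# the names used here ("cern-onsite", "rackspace-public") are made up.
import openstack

CLOUDS = ["cern-onsite", "rackspace-public"]

for cloud_name in CLOUDS:
    conn = openstack.connect(cloud=cloud_name)  # reads credentials from clouds.yaml
    print(f"--- servers in {cloud_name} ---")
    for server in conn.compute.servers():
        print(server.name, server.status)
```

The point of the pattern is that the code addressing the clouds stays identical regardless of which provider is behind each name, which is what makes a shared, federated environment practical for distributed teams.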
The amount of data collected when the Large Hadron Collider (LHC) is running is on the petabyte scale, and all of it flows through, and is stored on, OpenStack. The more easily this data can be shared among CERN’s researchers, and the less technology needed to do it, the better.
To keep this system running smoothly and efficiently, Rackspace has a full-time research fellow on site at CERN to provide assistance with design and implementation issues that come up across its OpenStack cloud environments. The open-source software is also used to manage the data center resources that power the LHC, which reportedly produces more than 30PB of data per year.
Using open-source software instead of proprietary software keeps costs low while keeping flexibility high. This is a plus for research labs all over the world, especially those that are underfunded.
The next phase of this partnership is creating standard templates to speed up the creation of OpenStack clouds so that CERN researchers have access to the data sooner.
Tesora Adds Features to Enterprise DBaaS | @CloudExpo [#Cloud]
Tesora’s enterprise database as a service, based on OpenStack Trove, adds support for the widely used Oracle 11g database, among other enhancements.
Tesora has announced an update to its enterprise-hardened implementation of the OpenStack Trove database-as-a-service (DBaaS) platform. This release adds support for more databases and OpenStack distributions, as well as new database management features and deeper integration with the OpenStack Horizon dashboard.
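For readers unfamiliar with Trove, the sketch below shows roughly what provisioning a database instance through a Trove-based DBaaS looks like using the python-troveclient library. The credentials, flavour and datastore values are placeholders, and exact client signatures vary between releases, so treat this as an outline rather than a recipe for Tesora’s specific distribution.

```python
# Outline of provisioning a database instance against a Trove-based DBaaS.
# All credentials, IDs and versions below are placeholder values.
from troveclient.v1 import client

trove = client.Client("demo-user", "demo-password",
                      project_id="demo-project",
                      auth_url="http://keystone.example.local:5000/v2.0")

# See which datastores the platform exposes (MySQL, and in newer releases Oracle 11g, etc.).
for datastore in trove.datastores.list():
    print(datastore.name)

# Ask the service for a small database instance; flavour ID, volume size and
# datastore version are illustrative.
instance = trove.instances.create(
    "orders-db",
    flavor_id="m1.small",
    volume={"size": 5},
    datastore="mysql",
    datastore_version="5.6",
)
print(instance.id, instance.status)
```

The appeal of the DBaaS model is visible even in this sketch: the caller never touches the underlying VMs or storage, only the database-level abstraction the platform manages on their behalf.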