Advice for the New On-Call Engineer By @VictorOps | @DevOpsSummit [#DevOps]

There is more to being on-call than knowing how to type the latest ChatOps commands, reboot AMIs and print out Java stack traces. There are life skills that come from being on-call for a while, and fortunately those are lessons that can be taught.

Here at VictorOps we’re currently adding six new engineers to our on-call roster, so I’ve been thinking about the experience of being on-call and how to make the best of it.

The first day you go on-call can be frightening. The most important thing to remember is that you’ve already passed the first test. You have the trust and respect of your teammates and are providing them with a valuable commodity: peace of mind. No one wants to be on-call, so stepping up to the plate and taking shifts helps to improve the lives of everyone on your team.


Tech Companies Competing To Be The Leader In Cloud Computing

Cloud computing is a technology that allows centralized data storage and online access to computer services and resources through remote servers and networks. Many technology companies have begun to transition from traditional IT resources to cloud computing and are competing to become the world leader. The better-managed companies will make this transition more successfully than those that lack decent management. Here is a look at what some of them are doing to make the move.

Cisco

Cisco’s transition into the cloud has allowed the company to grow beyond its traditional routers and switches. The company has seen a dramatic increase in earnings from data-related products and services, and reported a surge in users of its Unified Computing System data platform. Cisco is also transitioning into software-defined networking (SDN), which will force incumbent vendors to change their business models, although that shift will take a long time.

IBM

IBM’s cloud revenue shot through the roof last year, and this month its investors plan to discuss the next steps for its hybrid cloud options. IBM hopes to make all the separate clouds act as one: it believes hybrid cloud arrangements are more appealing to companies because they connect new-wave web applications with traditional back-end operations.

Oracle

Oracle’s cloud revenue grew by almost 50% last year across its Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) offerings. Oracle’s focus has been on growing its cloud business, and it has been showing. Mark Hurd, CEO of Oracle, recently said that within five to six years cloud applications could make up as much as 60% of the total enterprise application market, thanks to the speed with which new functionality and easy upgrades can be delivered.

VMware

VMware was late to the cloud transition because it was still profiting from server virtualization, which allows multiple operating systems to run on a single physical server. In 2013 the company began its move into the cloud and quickly started distributing its own products. Like IBM, it went with the hybrid cloud approach, so businesses can use private and public clouds depending on the level of security they want. VMware updated vSphere, the platform on which its cloud service is built, to provide hybrid cloud resources without requiring customers to change their existing practices.

Workday

Workday builds and provides cloud applications for finance and human resources departments and is one of the fastest-growing businesses in the Software-as-a-Service (SaaS) and big data space. Its revenues have been consistently increasing by more than 50%.


CIA claims its Amazon Web Services cloud is at ‘final operational capability’

(c)iStock.com/EdStock

It was one of the most fascinating battles of 2013: who would win the lucrative CIA cloud computing contract? Two horses were in the race, Amazon Web Services (AWS) and IBM; and it was the former who eventually came out on top despite appeals from the latter.

Now, according to CIA chief information officer Doug Wolfe, the AWS cloud has attained “final operational capability”.

As reported by Enterprise Tech, Wolfe told delegates at an industry event this week the CIA cloud would be “offset” on a private security network, and AWS had “made a big investment” in the project.

The AWS cloud will be unleashed across 17 US intelligence agencies according to the report, with Wolfe noting the CIA was “behind where [they] hoped to be” in terms of cloud adoption.

Wolfe had previously spoken at the Amazon Web Services government symposium in Washington back in June, where he said the AWS cloud would take “a few months to get online in a robust way.” In August, writing for Defense One, Frank Konkel reported the cloud was online.

It’s all a long way away from the argument and counter-argument when AWS and IBM were battling for the contract 18 months ago. AWS was given the decision despite its proposal costing over $50m a year more than IBM’s.

There was a fair amount of mudslinging from both sides at the time. AWS said IBM had “belatedly” moved into cloud computing yet “does not even register on many leading commercial cloud computing analyses”, while IBM said that “unlike Amazon, IBM has a long history of delivering successful transformational projects like this for the US government.” The Government Accountability Office (GAO) released a report in which parts of IBM’s complaint were sustained and others rejected, while noting Amazon’s offer was both “the best value” and a “superior technical solution.”

IBM did lodge an appeal, in which it alleged the procedures used to rank Amazon’s proposal as technically superior were wide of the mark, but it fell on deaf ears in October 2013 when a federal judge ruled against the Armonk firm.

Wolfe defended the decision to award the contract to AWS, praising the vendor for delivering the cloud infrastructure and getting the project up and running in less than 18 months.


Faster still: Analysing big data analytics and the agile enterprise

(c)iStock.com/sndr

By Mark Davis, Distinguished Big Data Engineer, Dell Software Group, Santa Clara, California

Big data technologies are increasingly considered an alternative to the data warehouse. Surveys of large corporations and organisations bear out the strong desire to incorporate big data management approaches as part of their competitive strategy.

But what is the value that these companies see? Faster decision making, more complete information, and greater agility in the face of competitive challenges. Traditional data warehousing involved complex steps to curate and schematise data, combined with expensive storage and access technologies. Complete plans had to work through archiving, governance, visualisation, master data management, OLAP cubes, and a range of different user expectations and project stakeholders. Managing these projects through to success also meant coping with rapidly changing technology options. The end result was often failure.

With the big data stack, some of these issues are pushed back or simplified. For example, the schematising and merging of data sources need not be considered up front in many cases, but can be done on demand. The concept of schema-on-read is based on a widely seen usage pattern that emerged from agile web startups: log files from web servers needed to be merged with relational stores to provide predictive value about user “journeys” through a website. The log files could be left at rest in cheap storage on commodity servers beefed up with software replication capabilities; the data was only touched when parts of the logs needed to be merged or a particular timeframe analysed.
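To make schema-on-read concrete, here is a minimal PySpark sketch (the file path, log layout, and column names are hypothetical): the raw logs stay in cheap storage as plain text, and structure is imposed only at query time, on the slice of data actually being analysed.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import split, to_timestamp

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()

# Raw web-server logs sit untouched in cheap replicated storage;
# nothing was schematised when they were written.
raw = spark.read.text("hdfs:///logs/web/*.log")  # hypothetical path

# Impose a schema only now, at read time: split each line into
# the typed columns this particular analysis needs.
parts = split(raw.value, " ")
events = raw.select(
    parts.getItem(0).alias("user_id"),
    to_timestamp(parts.getItem(1), "yyyy-MM-dd'T'HH:mm:ss").alias("ts"),
    parts.getItem(2).alias("url"),
)

# Only the timeframe of interest is actually touched.
journeys = events.where("ts >= '2015-02-01'").groupBy("user_id").count()
journeys.show()
```

Merging these parsed events with a relational extract of user accounts can then happen on demand, which is the “user journeys” pattern described above.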

Distributing data processing on commodity hardware led to the obvious next step of moving parts of the data into memory, or processing it as it streams through the system. This most recent evolution of the big data stack shares characteristics with high-performance computing techniques, which have increasingly ganged processors together across interconnect fabrics rather than tying custom processors to large collections of RAM. The BDAS (Berkeley Data Analytics Stack) exemplifies this new world of analytical processing: it combines an in-memory, distributed processing engine (Spark), a streaming system (Spark Streaming), a graph engine that layers on top of Spark (GraphX), and machine learning components (MLbase). Together these tools sit on top of Hadoop, which provides a resilient, replicated storage layer combined with resource management.
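As a rough illustration of the style of this stack (a sketch, not a production pipeline; the input path and socket source are hypothetical), the snippet below caches a working set in cluster RAM using Spark’s RDD API and runs a small Spark Streaming aggregation over micro-batches:

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="bdas-style-sketch")

# In-memory analytics: pin a working set across the cluster so
# repeated queries avoid rereading the replicated storage layer.
clicks = sc.textFile("hdfs:///logs/clicks")  # hypothetical path
cached = clicks.map(lambda line: line.split(",")).cache()
print(cached.count())  # first action reads from storage
print(cached.count())  # second action is served from memory

# Stream processing: the same functional API over 5-second micro-batches.
ssc = StreamingContext(sc, batchDuration=5)
lines = ssc.socketTextStream("localhost", 9999)  # hypothetical source
counts = (lines.flatMap(lambda l: l.split(" "))
               .map(lambda w: (w, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()

ssc.start()
ssc.awaitTermination()
```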

What can we expect in the future? Data warehousing purists have watched these developments with a combination of interest and some degree of scepticism. The latter is because the problems and solutions they have perfected through the years are not yet fully baked into the big data community; it can seem a bit like amateur hour.

But that is changing rapidly. Security and governance, for instance, have been weak parts of the big data story, but there is now a range of security approaches, from Kerberos protocols permeating the stack to integrated REST APIs with authentication at the edges of the clustered resources. Governance is likewise improving, with projects growing out of the interplay between open source contributors and the enterprises that want to explore the tooling. We will continue to see a rich evolution of the big data world until it looks more and more like traditional data warehousing, but with a lower cost of entry and increased accessibility for developers and business decision makers.
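As a sketch of the “authentication at the edges” pattern just mentioned (the gateway URL, endpoint, and credentials are all hypothetical), a client reaches clustered resources only through an authenticated REST gateway over TLS, while something like Kerberos secures traffic inside the cluster:

```python
import requests

# Hypothetical REST gateway at the edge of the cluster; clients
# never talk to the data nodes directly.
GATEWAY = "https://gateway.example.com/cluster/api/v1"

session = requests.Session()
session.auth = ("analyst", "s3cret")      # hypothetical credentials
session.verify = "/etc/ssl/certs/ca.pem"  # pin the gateway's CA bundle

# Authentication and authorisation are enforced here, at the edge.
resp = session.get(f"{GATEWAY}/jobs", timeout=30)
resp.raise_for_status()
for job in resp.json():                   # hypothetical response shape
    print(job["id"], job["state"])
```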

About the author:

Mark Davis founded one of the first big data analytics startups, Kitenga, which was acquired by Dell Software Group in 2012, where he now serves as a Distinguished Engineer. Mark led big data efforts as part of the IEEE Cloud Computing Initiative, serves on the executive committee of the Intercloud Testbed, and contributes to the IEEE Big Data Initiative.

IBM Launches Ad Campaigns on Cloud Computing Journal | @CloudExpo [#Cloud]

SYS-CON Media announced that IBM, which offers the world’s deepest portfolio of technologies and expertise that are transforming the future of work, has launched ad campaigns on SYS-CON’s numerous online magazines such as Cloud Computing Journal, Virtualization Journal, SOA World Magazine, and IoT Journal.

IBM’s campaigns focus on vendors in the technology marketplace, the future of testing, Big Data and analytics, and mobile platforms.


FREE ‘Internet of Things’ Conference Registration at @ThingsExpo [#IoT]

Internet of @ThingsExpo announced today a limited-time free “Expo Plus” registration option. The on-site registration price of $600 will be set to ‘free’ for delegates who register during this period. To take advantage of this opportunity, attendees can use the coupon code “IoTAugust” to secure an “@ThingsExpo Plus” registration, which includes all keynotes, a limited number of technical sessions each day of the show, full access to the expo floor, and the @ThingsExpo hackathon. The registration page is located at the @ThingsExpo site.


Jason Bloomberg Joins @DevOpsSummit New York Faculty | @TheEbizWizard [#DevOps]

The cloud has transformed how we think about software quality. Instead of preventing failures, we must focus on automatic recovery from failure. In other words, resilience trumps traditional quality measures.

Continuous delivery models further squeeze traditional notions of quality. Remember the venerable project management Iron Triangle? Among time, scope, and cost, you can only fix two or quality will suffer.

But in today’s DevOps world, continuous testing, integration, and deployment upend the time metric; the DevOps cadence reinvents project scope; and cost metrics expand past software development to the total cost of ownership of the end-to-end digital initiative.


Jez Humble on High IT Performance By @Skytap | @DevOpsSummit [#DevOps]

It didn’t hit me until after a second viewing of Jez Humble’s recent webinar with Perforce that his ways to predict high IT performance aren’t just a neat party trick; they’re absolutely essential. The days of bugs making it into production, distrust between departments, and delayed fixes and releases are becoming a thing of the past, at least among “high performers” of the world.

And as the complexity of software grows year after year, being able to accurately predict success may be the only way to actually achieve it.


Salesforce delivers another billion dollar quarter, $5bn in annual revenue, shares skyrocket

Picture credit: Salesforce

Cloudy software provider Salesforce has announced its latest financial results, with $5.37bn (£3.46bn) in total annual revenue and another billion dollar quarter.

The results were in line with Wall Street’s expectations, with earnings per share at $0.14 and year-on-year revenue growth of 26.1%.

It certainly seems a long way from 2009, when Salesforce’s first billion dollar annual figures arrived, and late 2013, when the first billion dollar quarter arrived. CEO Marc Benioff understandably saw the latter as a major achievement at the time – it’s now almost routine.

“Salesforce delivered yet another year of exceptional growth, with revenue, deferred revenue and operating cash flow all growing more than 30%,” Benioff said in a statement. “Salesforce reached $5 billion in annual revenue faster than any other enterprise software company and now it’s our goal to be the fastest to reach $10 billion.”

Gross profit for Q414 ended at $1.09bn, up from $871m this time last year, while annual gross profit stood at $4.23bn, an increase of 25% from 2013’s $3.39bn.

Shares of Salesforce shot up as much as 10% in the aftermath of the news, yet analysts were keeping their powder relatively dry. Tim Beyers, of the Motley Fool, said the company’s deferred revenue figures looked good – understandably, given that its main selling point is subscription based – yet added that if the balance grew more slowly compared to deferred revenue in the future, it “could suggest the company is having a tougher time signing the sorts of lucrative, multi-year deals Benioff wants.”

Kara Ordway, senior market dealer at City Index Australia, told CloudTech the results were “no great surprise” yet added the market was “pleasantly surprised” by Salesforce’s hike in its revenue outlook range.

“Going forward, Salesforce looks well positioned to take advantage of one of the fastest growing markets in technology and is set to reap the benefits of its expansion into the fast-growing European markets,” she said. “However, Salesforce is yet to fully explore the advantages of geographic revenue diversity, which is where the opportunity sits for the future.”

Ordway added: “Salesforce performed in line with expectations, however those forecasts were already well above the previous year’s results. With such an upbeat outlook, investors were particularly keen to jump on board.”

2014 highlights for Salesforce included the launch of the Salesforce1 app and the opening of its first UK data centre, with further European expansion on the horizon. The company also announced a global agreement with Sage, under which employees of the business management provider will use Salesforce’s Customer Success Platform, as well as an update to its Desk.com product, which is now available in more than 50 languages.