According to a new global study commissioned by CA Technologies, 72% of organizations have implemented some aspect of DevOps, but a closer look at the numbers shows that only 20% of DevOps users have put all the pieces in place to reap the full benefits.
Unlike many IT-related concepts, DevOps doesn’t revolve around a specific type of technology, and it can’t be classed as a methodology either. Indeed, DevOps generally requires blending a number of different technologies, skill-sets, tools and methods.
The Best Places to Buy iPhone and iPad Accessories
Remember when the iPhone was brand new and the biggest criticism competitors hurled at Apple was that it was too hard to personalize? Boy, did that one backfire. As the iPhone (and soon after, iPad) gained in popularity, countless stores, shops, and independent artists started producing endless accessories for your new digital BFF. With that […]
IoT and Hello Barbie | @ThingsExpo #IoT #M2M #API #InternetOfThings
Hello Barbie™ is an IoT-enabled (Internet of Things) Barbie doll with blonde hair, blue eyes and a built-in surveillance system. She’s not the first of her kind (and she won’t be the last), but here’s what you should know about bringing her, or any connected device, into your home.
Everything that connects to the public Internet is vulnerable. Encryption does not solve the problem. While it is true that you need about 6.4 billion years to crack a 2048-bit PGP encrypted file, I can probably socially engineer you out of your encryption key by attaching a little piece of malware to an email that offers you two discounted Super Bowl tickets and a deal on a hotel.
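The time-to-crack figure above is the kind of thing you can sanity-check on the back of an envelope: a 2048-bit RSA key is commonly rated at roughly 112 bits of symmetric-equivalent security, and even a very fast attacker cannot dent a keyspace that size. A minimal sketch, in which the attacker's guess rate is an assumed figure chosen purely for illustration:

```python
# Back-of-the-envelope brute-force estimate: illustrative only.
# A 2048-bit RSA key is commonly rated at ~112 bits of symmetric-equivalent
# security; the guess rate below is an assumed, optimistic figure.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def brute_force_years(security_bits: int, guesses_per_second: float) -> float:
    """Expected years to search half the keyspace at a fixed guess rate."""
    keyspace = 2 ** security_bits
    return (keyspace / 2) / guesses_per_second / SECONDS_PER_YEAR

# Assume an attacker testing a trillion keys per second.
years = brute_force_years(112, 1e12)
print(f"{years:.2e} years")  # astronomically large either way
```

Whatever constants you plug in, the answer stays astronomical, which is exactly the article's point: the weak link is the human holding the key, not the mathematics.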
How manufacturing leaders are falling for the public cloud
(c)iStock.com/stockvisual
85% of line of business (LOB) decision makers in the manufacturing industry are using at least one form of public cloud service, according to a new research study.
The report, released jointly by EMC, VCE, and VMware – all now part of Dell in some capacity after the whopping $67 billion deal for EMC in October – polled more than 600 decision makers across six industries, one sixth each in telecoms, finance, retail, public sector, oil and gas, and manufacturing. Yet it was manufacturing that provided the most interesting results.
Cutting costs (33%) and driving efficiencies (29%) are the primary uses for public cloud services, according to the respondents, with the majority of line of business employees surveyed (87%) saying they consult IT on cloud deployments.
Despite this, security worries remain. Security exploits, cited by 50% of respondents, were the most worrying aspect of cloud deployments for line of business leaders. Reputational cost to the business (43%) and internal data loss (36%) were also seen as important.
“Manufacturers are united in their appreciation and use of public cloud services, and understandably so – it can offer the agility and flexibility that many LOBs in the industry need to keep up with rapidly changing market demands,” said Rob Lamb, EMC UK and Ireland cloud business director.
“For manufacturing IT departments to be more heavily involved in LOB IT decisions, they need to embrace a cloud strategy that allows others to continue cutting costs and drive efficiencies while mitigating security and data loss concerns,” he added.
According to the latest rankings from research firm IDC, EMC sits in fourth position among global cloud infrastructure vendors, behind HP, new custodian Dell, and Cisco.
Is there a DevOps skills gap?
(c)iStock.com/George Clerk
I have been seeing job ads more and more frequently that heavily emphasise the requirement for ‘current skill sets’. The fact that these ads are putting such emphasis on the importance of skill sets being current signals to me a trend within the DevOps landscape: the ever-widening gulf between technological innovation and those who have exposure to it.
There are two core reasons why this problem exists and is growing. Firstly, maintaining employees with ‘cutting edge’ skill sets for cloud-based development requires either the IT budget of a large corporation or a team small enough to maintain an environment of rapid prototyping and experimentation. Needless to say, these extremes limit the number of candidates with the required abilities.
The traditional rules for filling tech vacancies do not apply equally in the world of DevOps
Secondly, if tech workers spend time out of work for whatever reason, their skills rapidly become out of date. A period of time without work is punishing for anyone searching for a new job, but this has a particularly detrimental effect on technical people working in areas such as DevOps with formal skills requirements that are changing all the time.
A technical staffer does not even have to spend a few months out of work to be slapped with the ‘out of date skills’ label. Traditional career progression pathways mean that the more seniority a person gains in the workplace, the less likely they are to be working on the ‘coal face’ of the problem.
As a result, technical managers lose their tech chops and when applying to other positions often face the prospect of starting again as a junior engineer to regain their technical credentials. This problem is compounded by hiring managers often opting to reject more seasoned “over-qualified” candidates in favour of new-to-market and cheaper hires with more current skillsets. Within DevOps these problems are more apparent because the technical skills required for the job are evolving at breakneck pace.
This is important background information when we consider the problems faced by startups when it comes to hiring technical talent. It is very difficult for non-technical people to go about hiring a technical employee, and this can be exacerbated by overemphasising specific certifications in narrow areas. This will dramatically – and unnecessarily – limit your candidate pool, and very often candidates have equivalent experience that only an expert would be able to identify. For example, someone with strong experience with Ubuntu Linux will likely not have difficulty with Red Hat Enterprise Linux after a brief acquaintance period.
Startups often hire people who have only ever had experience working with AWS or other cloud providers. As a result, the world beyond the virtual machine is a total mystery to them. For example, while they may have the specific skills, qualifications and certifications you’re looking for, they have little experience of organising cables and racks of equipment.
Certifications are all well and good, but in my opinion they should only take an application to a certain point. There is no substitute for genuine work experience: while certifications give a candidate credibility within a narrow area, ‘on the job’ experience is a far better indicator of future job performance. In my time as a manager some of the most gifted engineers I’ve worked with are those who started out as hackers and rogues, not ivory tower academics. In many areas of tech, it is far better to learn by doing, and sometimes doing it wrong, than it is to learn in the abstract.
In many areas of tech, it is far better to learn by doing – and sometimes doing it wrong – than it is to learn in the abstract
This divide is something that will become more and more apparent as companies shoulder the burden of ever-increasing layers of legacy systems: what skills are best for building, and what skills are best for maintaining. In my view, these same hackers and rogues, those who learn by doing, are far better suited to rapid prototyping, to building new things and coming up with new ideas. By contrast, employees with more formalised learning and specific codified experience also have significant value, however they’re better placed in roles requiring them to maintain existing things.
The key takeaway from all this is that the traditional rules for filling tech vacancies do not apply equally in the world of DevOps. A candidate being a few steps behind the curve should not disqualify them from the job, and by the same token a candidate who has the perfect blend of qualifications for the job should not be given a free pass. In today’s rapidly moving environment, having precise qualifications for self-contained skillsets is not as important as having a flexible, adaptable and cross-functional approach. Before you begin searching for that ideal candidate, make sure you know what you’re actually looking for.
Keep Your Data Active By @GorillaFlash | @CloudExpo @HGSTStorage #Cloud #BigData
We all know that data growth is exploding and storage budgets are shrinking.
Instead of showing you charts on how much data there is, in his session at 18th Cloud Expo, Scott Cleland, Senior Director of Product Marketing for HGST, will show how to capture all of your data in one place. After you have your data under control, you can then analyze it in one place, saving time and resources. See how HGST has used these solutions to gain more value out of the information we have – and capitalize on that value by delivering better products.
Tackling the resource gap in the transition to hybrid IT
Is hybrid IT inevitable? That’s a question we ask customers a lot. From our discussions with CIOs and CEOs there is one overriding response and that is the need for change. It is very clear that across all sectors, CEOs are challenging their IT departments to innovate – to come up with something different.
Established companies are seeing new threats coming into the market. These new players are lean, hungry and driving innovation through their use of IT solutions. Our view is that more than 70 percent of all CEOs are putting a much bigger ask on their IT departments than they did a few years ago.
There has never been so much focus on the CIO or IT departmental manager from a strategic standpoint. IT directors need to demonstrate how they can drive more uptime, improve the customer experience, or enhance the e-commerce proposition for instance, in a bid to win new business. For them, it is time to step up to the plate. But in reality there’s little or no increase in budget to accommodate these new demands.
We call the difference between what the IT department is being asked to do, and what it is able to do, the resource gap. With the rate of change in the IT landscape increasing, the demands on CIOs from the business increasing, and little or no increase in IT budgets from one year to the next, that gap is only going to get wider.
But by changing their way of working, companies can free up additional resources to go and find their innovative zeal and get closer to meeting their business’ demands. Embracing Hybrid IT as their infrastructure strategy can extend the range of resources available to companies and their ability to meet business demands almost overnight.
Innovate your way to growth
A Hybrid IT environment combines a company’s existing on-premise resources with public and private cloud offerings from a third-party hosting company. Hybrid IT can provide the best of both worlds: sensitive data can still be retained in-house by the user company, while the cloud, either private or public, provides the resources and computing power needed to scale up (or down) when necessary.
Traditionally, 80 percent of an IT department’s budget is spent just ‘keeping the lights on’: keeping servers running, powering desktop PCs, backing up work, handling general maintenance and so on.
But with the CEO now raising the bar, more innovation in the cloud is required. Companies need to keep their operation running but reapportion the budget so they can become more agile, adaptable and versatile to keep up with today’s modern business needs.
This is where Hybrid IT comes in. Companies can mix and match their needs to any type of solution. That can be their existing in-house capability, or they can share the resources and expertise of a managed services provider. The cloud can be private – servers that are the exclusive preserve of one company – or public, sharing utilities with a number of other companies.
Costs are kept to a minimum because the company only pays for what they use. They can own the computing power, but not the hardware. Crucially, it can be switched on or off according to needs. So, if there is a peak in demand, a busy time of year, a last minute rush, they can turn on this resource to match the demand. And off again.
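The pay-for-what-you-use argument is easy to make concrete with a toy cost model. In the sketch below, every rate and every usage figure is a hypothetical value invented for illustration; the point is simply the shape of the comparison between always-on capacity and burst capacity that is switched off outside the peak:

```python
# Toy cost model: fixed (always-on) capacity vs. pay-per-use burst capacity.
# All rates and hour counts are hypothetical illustration values.

HOURS_PER_MONTH = 730

def fixed_cost(servers: int, rate_per_hour: float) -> float:
    """Owned/always-on capacity: billed every hour, busy or not."""
    return servers * rate_per_hour * HOURS_PER_MONTH

def on_demand_cost(servers: int, rate_per_hour: float, busy_hours: int) -> float:
    """Burst capacity: billed only for the hours it is switched on."""
    return servers * rate_per_hour * busy_hours

# Ten extra servers needed only for a 100-hour month-end peak.
always_on = fixed_cost(10, 0.50)           # billed for the whole month
burst = on_demand_cost(10, 0.50, 100)      # billed for the peak only
print(always_on, burst)
```

Under these assumed numbers the burst capacity costs a fraction of the always-on equivalent, which is the economics behind turning the resource on to match demand "and off again".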
This is the journey to the Hybrid cloud and the birth of the agile, innovative market-focused company.
Meeting the market needs
Moving to hybrid IT is a journey. Choosing the right partner to make that journey with is crucial to the success of the business. In the past, businesses could get away with a rigid customer / supplier relationship with their service provider. Now, there needs to be a much greater emphasis on creating a partnership so that the managed services provider can really get to understand the business. Only by truly getting under the skin of a business can the layers be peeled back to reveal a solution to the underlying problem.
The relationship between customer and managed service provider is now also much more strategic and contextual. The end users are looking for outcomes, not just equipment to plug a gap.
As an example, take an airline company operating in a highly competitive environment. They view themselves as being not in the people transportation sector, but as a retailer providing a full shopping service (with a trip across the Atlantic thrown in). They want to use cloud services to take their customer on a digital experience, so the minute a customer buys a ticket is when the journey starts.
When the passenger arrives at the airport, they need to check in, choose the seats they want, do the bag drop and clear security all using on-line booking systems. Once in the lounge, they’ll access the Wi-Fi system, check their Hotmail, browse Facebook, start sharing pictures etc. They may also choose last minute adjustments to their journey like changing their booking or choosing to sit in a different part of the aircraft.
Merely saying “we’re going to do this using the cloud” is likely to lead to the project misfiring. A good partner should have experience of building and running both traditional infrastructure environments and new ones based on innovative cloud solutions, so that they can bring ‘real world’ transformation experience to the partnership. Importantly, they must also have the confidence to demonstrate digital leadership and an understanding of the business and its strategy, to add real value to the customer as it undertakes the journey of digital transformation.
Costs can certainly be rationalised along the way. Ultimately, with a hybrid system you only pay for what you use, so peak periods will cost the same as, or less than, the off-peak operating expenses. With added security, compute power, speed, cost efficiencies and ‘value-added’ services, hybrid IT can provide the agility businesses need.
With these solutions, companies have no need to ‘mind the gap’ between the resources they need and the budget they have. Hybrid IT has the ability to bridge that gap and ensure businesses operate with the agility and speed they need to meet the needs of the competitive modern world.
Written by Jonathan Barrett, Vice President of Sales, CenturyLink, EMEA
Qualcomm and Guizhou to make new server chipsets in China
San Diego-based chip maker Qualcomm and China’s Guizhou Huaxintong Semi-Conductor company have announced a joint venture to develop new server chipsets designed for the Chinese market.
The news comes only a week after chip maker AMD announced its new Opteron A1100 System-on-Chip (SoC) for ARM-based systems in the data centre. Both partnerships reflect how server design for data centres is evolving to suit the cloud industry.
The Qualcomm partnership, announced on its web site, was formalised at the China National Convention Center in Beijing as officials from both companies and the People’s Government of Guizhou Province signed a strategic cooperation agreement. The $280 million joint venture will be 55% owned by the Guizhou provincial government’s investment arm, while 45% will belong to a Qualcomm subsidiary.
The plan is to develop advanced server chipsets in China, which is now the world’s second largest market for server technology sales.
The action is an important step for Qualcomm as it looks to deepen its level of cooperation and investment in China, said Qualcomm president Derek Aberle. In February 2015 BCN sister publication Telecoms.com reported how the chip giant had fallen foul of the Chinese authorities for violating China’s trading laws. It was fined 6 billion yuan (around $1 billion) after its marketing strategy was judged to be against the nation’s anti-monopoly law.
“The strategic cooperation with Guizhou represents a significant increase in our collaboration in China,” said Aberle. Qualcomm is to provide investment capital, license its server technology to the joint venture, help with research and development and provide implementation expertise. “This underscores our commitment as a strategic partner in China,” said Aberle.
Last week, AMD claimed the launch of its new Opteron A1100 SoC will catalyse a much more rapid development process for creating servers suited to hosting cloud computing in data centres.
AMD’s partner in server chip development, ARM, is better placed to create processors for the cloud market because it caters for a wider diversity of needs, the company claimed. Whereas Intel makes its own silicon and can only hope to ship around 30 custom versions of its latest Xeon processor to large customers like eBay or Amazon, ARM can license its designs to some 300 third-party silicon vendors, each developing their own variants for different clients and server workloads.
“The ecosystem for ARM in the data centre is approaching an inflection point and the addition of AMD’s high-performance processor is another strong step forward for customers looking for a data centre-class ARM solution,” said Scott Aylor, AMD’s general manager of Enterprise Solutions.
Infor invests $25 million in cloud-based retail analyst Predictix
Cloud app provider Infor has invested $25 million in data science specialist Predictix, with a brief to use its powers of analysis to solve the problems troubling modern retailers.
Infor will become a reseller of Predictix, incorporating its applications in the Infor cloud and its CloudSuite offering aimed at the Retail sector. Atlanta-based Predictix had 40% growth in SaaS subscriptions in 2015, has five of the top 15 global retailers as customers and manages $60bn in weekly forecasts. Its technology platform LogicBlox, which underpins all Predictix applications, has attracted funding from DARPA, the US Defense Advanced Research Projects Agency. The Infor cloud has 40 million users.
The addition of Predictix to CloudSuite Retail brings modules that support demand forecasting, merchandise financial planning, assortment planning, category management, network flow optimization and optimisation of allocations and markdowns.
Cloud-based demand forecasting could be 50% more accurate for retailers, claims Predictix, since it is more elastic and can apply self-learning algorithms to unlimited amounts of data. The improvements in forecasting lead to higher sales and greater profitability for retailers, it claims. Merchandise financial planning, meanwhile, helps retailers diversify and generate more revenue from existing product lines, according to Predictix. Assortment planning and category management, for their part, can create up to $20 million in annual benefit for every one billion dollars in sales.
Network flow optimisation, meanwhile, saves retailers money by modelling the entire supply chain network, analysing wastage and offering new, more efficient alternatives. The allocation and markdown optimisation services promise to give retailers up to 20% greater revenues and profits. Predictix is suited to all types of global retail, including online, brick-and-mortar, social, mobile, fashion, hardlines, mass-merchant and grocery, the vendor says.
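Predictix does not publish its algorithms, so as a stand-in for the self-learning forecasting idea, here is the simplest textbook example: exponential smoothing, which nudges its forecast toward each new week of sales data. The sales figures are invented for illustration and this is a generic technique, not Predictix's actual method:

```python
# Minimal illustration of adaptive demand forecasting: simple exponential
# smoothing. Generic textbook technique, not Predictix's actual method.

def smooth_forecast(weekly_sales, alpha=0.3):
    """Return a next-week forecast; each observation nudges the estimate
    by a fraction alpha toward the latest actual value."""
    forecast = float(weekly_sales[0])      # seed with the first week
    for actual in weekly_sales[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

sales = [100, 120, 110, 130, 125]          # hypothetical units per week
print(round(smooth_forecast(sales), 1))
```

Real retail forecasters layer seasonality, promotions and product hierarchies on top, but the core loop of updating an estimate as data arrives is the same idea.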
AWS, Azure and Google intensify cloud price war
As price competition intensifies among the top three cloud service providers, one analyst has warned that cloud buyers should not get drawn into a race to the bottom.
Following price cuts by AWS and Google, last week Microsoft lowered the bar further with cuts to its Azure service. Though smaller players will struggle to compete on cost, the cloud services market is a long way from an oligopoly, according to Quocirca analyst Clive Longbottom.
Amazon Web Services began the bidding in early January as chief technology evangelist Jeff Barr announced the company’s 51st cloud price cut on his official AWS blog.
On January 8th, Google’s Julia Ferraioli argued in a blog post that Google is now the more cost-effective offering as a result of its discounting scheme. “Google is anywhere from 15 to 41% less expensive than AWS for compute resources,” said Ferraioli. The key to Google’s latest lead in cost effectiveness, she claimed, is automatic sustained usage discounts and custom machine types that AWS can’t match.
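For readers unfamiliar with sustained usage discounts: the scheme Google published at the time charged each successive quarter of the month at a cheaper incremental rate, so a VM left running all month paid roughly 70% of list price automatically, with no upfront reservation. A sketch of that tiered blending (treat the exact tier numbers as illustrative):

```python
# Sketch of GCE-style sustained-use discounting: each successive quarter
# of the month billed at a cheaper incremental rate. Tier multipliers
# follow the scheme as published at the time; treat them as illustrative.

TIER_MULTIPLIERS = [1.0, 0.8, 0.6, 0.4]  # one per quarter-month of usage

def sustained_use_cost(base_hourly: float, hours_used: float,
                       hours_in_month: float = 730.0) -> float:
    """Blend the tier rates across however much of the month was used."""
    cost = 0.0
    remaining = hours_used
    quarter = hours_in_month / 4
    for mult in TIER_MULTIPLIERS:
        chunk = min(remaining, quarter)
        cost += chunk * base_hourly * mult
        remaining -= chunk
        if remaining <= 0:
            break
    return cost

full_month = sustained_use_cost(0.10, 730.0)
print(round(full_month / (0.10 * 730.0), 2))  # effective fraction of list price
```

The contrast with AWS at the time was that EC2's comparable savings required committing to reserved instances in advance, whereas this discount applied automatically to whatever actually ran.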
Last week Microsoft’s Cloud Platform product marketing director Nicole Herskowitz announced the latest round of price competition in a company blog post announcing a 17% cut off the prices of its Dv2 Virtual Machines.
Herskowitz claimed that Microsoft offers better price performance because, unlike AWS EC2, Azure’s Dv2 instances include load balancing and auto-scaling built in at no extra charge.
Microsoft is also aiming to change the perception of AWS’s superiority as an infrastructure service provider. “Azure customers are using the rich set of services spanning IaaS and PaaS,” wrote Herskowitz, “today, more than half of Azure IaaS customers are benefiting by adopting higher level PaaS services.”
Price is not everything in this market, warned Longbottom; an equally important side of any cloud deal is overall value. “Even though AWS, Microsoft and Google all offer high availability and there is little doubting their professionalism in putting the stack together, it doesn’t mean that these are the right platform for all workloads. They have all had downtime that shouldn’t have happened,” said Longbottom.
The level of risk the provider is willing to protect the customer from and the business and technical help they provide are still deal breakers, Longbottom said. “If you need more support, then it may well be that something like IBM SoftLayer is a better bet. If you want pre-prepared software as a service, then you need to look elsewhere. So it’s still horses for courses and these three are not the only horses in town.”