Software is eating the world. Companies that were not previously in the technology space now find themselves competing with Google and Amazon on speed of innovation. As the innovation cycle accelerates, companies must embrace rapid and constant change to both applications and their infrastructure, and find a way to deliver speed and agility of development without sacrificing reliability or efficiency of operations.
In her Day 2 keynote at DevOps Summit, Victoria Livschitz, CEO of Qubell, discussed how IT organizations can automate just-in-time assembly of application environments – each built for a specific purpose with the right infrastructure, components, services, data and tools – and deliver this automation to developers as a self-service. Victoria's keynote included remarks by Kira Makagon, EVP of Innovation at RingCentral, and Ratnakar Lavu, EVP of Digital Technology at Kohl's.
Network Security Trends to Be Aware of By @RenePaap | @CloudExpo [#Cloud]
With every New Year it is time to look back at the industry events of the past 12 months and use our expertise to predict what lies ahead, in order to be better prepared. With regard to DDoS attacks, here is a short list of what to expect in 2015.
We expect to see an increase in DDoS attack size in terms of bandwidth, but there will likely be a growing emphasis on the packets-per-second metric. Bandwidth is an important metric, and it is something most people can relate to: when shopping for a home Internet connection, bandwidth is what people compare.
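To see why packets per second matters as much as raw bandwidth, a rough back-of-the-envelope conversion helps: the smaller the packets, the more of them a given link can carry and the harder mitigation gear has to work. The sketch below is illustrative only and assumes minimum-size 64-byte Ethernet frames; the figures are not drawn from this article.

# Rough conversion from link bandwidth to the packet rate it implies.
# Assumes minimum-size 64-byte Ethernet frames plus the 20 bytes of preamble
# and inter-frame gap that each frame occupies on the wire.

def packets_per_second(bandwidth_gbps, frame_bytes=64):
    wire_bits_per_frame = (frame_bytes + 20) * 8
    return bandwidth_gbps * 1e9 / wire_bits_per_frame

for gbps in (1, 10, 100):
    print(f"{gbps:>3} Gbps of 64-byte frames ~ {packets_per_second(gbps) / 1e6:.2f} Mpps")

At minimum frame size, even a 10 Gbps link translates to roughly 15 million packets per second, which is why per-packet processing capacity, not just raw bandwidth, determines whether an attack can be absorbed.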
Generating Quality with @CloudTest CEO @Lounibos | @DevOpsSummit [#DevOps]
“SOASTA built the concept of cloud testing in 2008. It’s grown from rather meager beginnings to where now we are provisioning hundreds of thousands of servers on a daily basis on behalf of customers around the world to test their applications,” explained Tom Lounibos, CEO of SOASTA, in this SYS-CON.tv interview at DevOps Summit, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Connecting Applications to the Cloud with @ZenteraSystems | @CloudExpo [#Cloud]
“The year of the cloud – we have no idea when it’s really happening but we think it’s happening now. For those technology providers like Zentera that are helping enterprises move to the cloud – it’s been fun to watch,” noted Mike Loftus, VP Product Management and Marketing at Zentera Systems, in this SYS-CON.tv interview at Cloud Expo, held Nov 4–6, 2014, at the Santa Clara Convention Center in Santa Clara, CA.
Campaign for Clear Licensing has IBM and SAP in its sights after Oracle assessment
The Campaign for Clear Licensing (CCL) has called for feedback on IBM and SAP’s software licensing practices, after an open letter was sent to Oracle addressing its concerns.
CloudTech broke the news that IBM and SAP were the next two vendors in the firing line earlier this month. The CCL hopes to present findings and recommendations at a roundtable on February 2.
Potential issues include complexity brought about by changing terms and conditions, forced product bundling, as well as a lack of clarification over licensing terminology and metrics, according to the CCL.
“After Oracle, our members overwhelmingly called out IBM and SAP as the next most prolific auditors and vendors commonly associated with testing vendor-customer relationships,” said Mark Flynn, CCL chief executive. “The issues raised by customers of IBM and SAP are not too dissimilar to those raised with other vendors.
“We will be looking into these issues and any others that our members highlight in the coming months, and hope that by exposing bad practice we can continue to work towards a more harmonious software licensing environment for vendors and customers alike,” he added.
The CCL released its report on Oracle in November, finding that the majority of customers have an “arm’s-length, impoverished” relationship with the tech giant. Among the recommendations were to ensure there is only one corporate voice, to invest in a well-organised knowledge base, and to provide better business communications.
CCL chairman Martin Thompson insisted that the spotlight had not moved away from Oracle, saying the not-for-profit will continue to work with the Redwood Shores firm as well as scrutinising IBM and SAP.
IBM and SAP both released financial results earlier this month, with both companies looking to shift their core revenue streams away from traditional software and into cloud. IBM hit $7bn of cloud revenue in 2014 but saw net income fall 7% year on year, while SAP’s cloud subscriptions and support revenue grew 45%, even as the company pushed back its 2017 operating profit goals.
Anybody looking to send feedback can do so via the CCL’s feedback pages for IBM and SAP. CloudTech has contacted IBM and SAP and will update this story in due course.
Combining the physical with the digital: Joining the data self-preservation society
By Simon Taylor, Chairman, Next Generation Data
As businesses continue to recognise the strategic importance of IT and data to their performance, and indeed to their very existence, we can be under no illusion about the absolute necessity of keeping data safe and staying alert to the associated risks – from the inherent ‘fragility’ of web and cloud infrastructure to things altogether more sinister, such as cyber or even physical terror attacks.
Whether your data is on your premises, stored in a colo data centre, in the cloud or otherwise, a comprehensive preventative data loss management and security strategy is essential. This means knowing exactly where and how your data is used, stored and secured, as well as being totally satisfied your organisation or your service provider has the ability to recover seamlessly from disasters we all hope will never happen.
Data loss prevention (DLP) strategies and software solutions are essential for making sure that users do not send sensitive or critical information outside the corporate network. IT administrators can then control what data can and cannot be transferred by monitoring, detecting and blocking sensitive data while it is in use, in transit or archived. The latter is very important, as stored sensitive and valuable data is often especially vulnerable to outside attack.
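As a rough illustration of that detect-and-block idea (not any particular vendor's product), the sketch below scans an outbound message against a few pattern rules before letting it leave the network; the pattern names and rules are invented for the example, and a real DLP product would use far richer classifiers and policies.

import re

# Illustrative patterns only.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_outbound(message):
    """Return the names of any sensitive patterns found in an outbound message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(message)]

def allow_transfer(message):
    """Block the transfer if anything sensitive is detected, otherwise let it through."""
    hits = scan_outbound(message)
    if hits:
        print("Blocked outbound data, matched:", ", ".join(hits))
        return False
    return True

allow_transfer("Quarterly report attached.")                   # allowed
allow_transfer("Card number 4111 1111 1111 1111, exp 09/26")   # blocked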
But effective protection from data loss, and ensuring its security, cannot be limited to data loss management and monitoring activities, or to the implementation of back-up, firewall, intrusion detection and anti-malware software – as is all too often the case.
Getting physical
There are equally critical, often overlooked, physical factors to consider for ensuring your data security and business continuity. Factors such as supply of reliable and stable power, diversity of fibre network options and sufficient cooling/environmental services all need to be carefully considered, along with perhaps mirroring of data on servers in remote ‘second site’ physical locations.
Over the past twenty years or so larger businesses have typically addressed all or some of these issues by building their own data centres close to their office premises to house their mission-critical servers and storage equipment. But this approach has had its own problems, not least the considerable capital expenditure involved in construction and the headache of keeping up to date with the latest hardware and software developments.
With this in mind many businesses are increasingly outsourcing their IT operations and data storage to modern specialist ‘colocation’ data centre operators. These can provide customers with space, power, and infrastructure to more securely house and manage their own IT operations, or alternatively manage these for them.
While cloud providers can certainly offer business users many benefits in terms of pay-as-you-go data storage and access to the very latest business applications, these services still depend on the reliability and security of servers in data centres somewhere. It is therefore prudent to find out from your cloud provider which data centres they are using and where they are located, and to have them report on the data management and security credentials and procedures in place.
It is also highly advisable to establish with them what happens to access to your data should a third party go into administration. Having a legal escrow agreement in place at the outset of the relationship will help ensure you can retrieve your data from their premises more easily. Without the above assurances, storing mission-critical data in the cloud can be risky.
Taking an integrated and holistic approach to data loss prevention and security will ensure both its security AND its continuous availability. But maximum peace of mind that your data is always available, safe and where you expect it to be also requires physical security to be given as much consideration as the digital aspects.
Data loss prevention considerations
1. Security: Physical security measures are often overlooked in favour of the digital variety but can often prove to be the weakest link of all.
How physically secure are your building and IT equipment? Consider how their location may impact your business continuity and data availability – being well away from areas susceptible to flooding, large urban areas and flight paths reduces exposure to potential risks.
2. Resilience: Are sufficient data back-up and replication fail-safe measures in place, along with Uninterruptible Power Systems (UPS), to mitigate unplanned downtime?
Does your data centre or computer room have access to abundant, resilient and redundant power, and diverse fibre connectivity links? Are servers being sufficiently cooled and energy-optimised to ensure maximum availability?
3. Service provider credentials: If outsourcing data directly to a colo data centre or via a cloud provider, check all of the above.
Also check their security and operational industry accreditations for actual proof (ISO, PCI DSS, SSAE 16, etc.) and the calibre of on-site engineering personnel for handling technical support issues and disaster recovery situations. Tier III data centres should be used as a minimum. Putting an escrow agreement in place will also ensure you have legal access to your data in the event of the provider going into administration.
Vietnam Shines In Our IT Research
There’s something extraordinary going on in Vietnam, and I’m not sure everyone sees it. The country blazes from the dry pages of our research printouts, its incandescence obscuring its neighbors and making our office fire alarms nervous.
Among the 105 nations we now survey, Vietnam will finish in or near the Top 20 in the world in our overall ranking, when we announce our latest results next month. It will be near the top in Asia. Our overall ranking integrates several socio-economic and technological factors.
Additionally, Vietnam will rank near the top of the world in our pure technology development ranking.
In contrast, Vietnam continues to lag in the traditional economic development rankings that I’ve read. The United Nations’ Human Development Index places it 121st among 187 nations, tied with Guyana and trailing even Syria and Iraq.
It fares a little better in the World Economic Forum’s Global Competitive Index, finishing in a tepid tie for 65th among 144 nations, in the neighborhood of Peru, Colombia, Slovenia, and India. Another ranking, the Asia Cloud Computing Association’s Cloud Readiness Index, places Vietnam dead last among 14 nations surveyed.
Damned Lies & Statistics
The country’s mediocre-to-poor rankings in these surveys and others are no doubt strongly tied to its per-person income of less than $2,000, which sits around the bottom quartile of world incomes. Compare this amount to about $2,800 in the neighboring Philippines, $5,600 in Thailand, and $7,000 in China.
Yet this statistic, as with all single statistics, doesn’t tell the whole story. Our research takes the view that relative development is the key; that is, how well a country is doing given its current economic resources. How strong are its underlying IT infrastructure and overarching societal development with respect to its overall wealth? And how dynamic is its environment? How quickly is its pace of change increasing?
By these measures, Vietnam is a star. Its global buzz has diminished recently, as years of rapid development following enactment of doi moi (renovation or innovation) reform policies in 1986 led to uneven development and societal stress.
Indeed, our research also shows Vietnam running “too hot” in our Goldilocks Index of pure technology development. How long can it sustain its current pace?
(More later…)
Containers: Don’t Skeu Them Up, Man
Gordon Haff wrote an insightful column today about “skeuomorphic virtualization,” in other words the notion of thinking of virtual servers as having the same look and feel as the physical machines lying underneath. The upshot is that virtualization ends up just being a form of partitioning; users view virtualized instances as no different from a single physical server.
Gordon urges the end of skeuomorphism’s reign of terror (my words, not his) now that containers are (back) in the news.
So don’t simply stuff “a hodgepodge,” as Gordon says, of operating system versions along with your app and all that goes with it (web server, database, storage, comms) into a container as if it’s just another virtualized machine.
Instead, he writes that it’s time to think of “containerized operating systems, container packaging systems, container orchestration like Kubernetes, DevOps practices, microservices architectures, ‘cattle’ (shorter term) workloads, software-defined everything, and pervasive open source as part of a new platform for cloud apps.”
Reboot Time
Old habits die hard. The relentless drive for efficiency in business focuses ever more on IT operations, breeding an instinct to view the new tool as simply an improved hammer and to try to slam things through with it more quickly. An increasing focus on DevOps, as seen from the high management level, can certainly reinforce this view.
But that would be wrong. Containers give everyone another shot at imagining their software as a number of services rather than a unified application. Further, as Gordon notes, they let you throw in an OS here, some orchestration there, a VPN there, and truly become efficient (and more future-proof) in the way you design and deploy what you’re trying to do.
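As a purely illustrative sketch of that services-first mindset (not anything from Gordon's column), here is one narrowly scoped service written so it can be packaged one-per-container and handed to an orchestrator, rather than buried inside a monolith. The endpoint and data are hypothetical.

from http.server import BaseHTTPRequestHandler, HTTPServer
import json

# Hypothetical plan catalogue; a real service would query its own data store.
PRICES = {"basic": 10, "pro": 25}

class PriceHandler(BaseHTTPRequestHandler):
    # One narrowly scoped responsibility: answer GET /<plan> with a price.
    def do_GET(self):
        plan = self.path.strip("/")
        if plan in PRICES:
            status, body = 200, json.dumps({"plan": plan, "price": PRICES[plan]})
        else:
            status, body = 404, json.dumps({"error": "unknown plan"})
        payload = body.encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Bind to all interfaces so the container's port can simply be exposed
    # and the process scheduled wherever the orchestrator places it.
    HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()

The point is the shape, not the code: each small service owns one job, runs in its own container, and is composed with others by the orchestration layer instead of being compiled into one application.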
The original SOA days seem far away to me, but the idea of provisioning services is still a radical one. Too often today, software-as-a-service in the cloud still means software-as-an-application from the sky. The inexorability of Moore’s Law allows people to slam things through for short-term gain rather than long-term efficiency, and this will be particularly true as containers become prized for their relatively abstemious use of resources.
But an expeditious, rather than more profound, use of containers will just create tomorrow’s hodgepodge. The potential of containers to reboot the way we think of services and how they should be delivered can logically lead to re-imagining how users should experience what the services offer – and how skeuomorphic those experiences should be, of course.
OSS Development for the Modern Data Center By @JohnSavageau | @CloudExpo [#Cloud]
Modern data centers are very complex environments. Data center operators must have visibility into a wide range of integrated databases, applications, and performance indicators to effectively understand and manage their operations and activities. Unfortunately, in many cases these systems are managed manually, follow no standards, and have no automation or integration interconnecting individual back-office components. This also includes many communication companies and telecommunications carriers which previously either adhered, or claimed to adhere, to Bellcore data and operations standards.
In some cases, the lack of integration is due to the many mergers and acquisitions of companies with unique or non-standard back-office systems. The result is difficulty in cross-provisioning, billing, integrated customer management systems, and accounting – the day-to-day operations of a data center.
Have It Your Way By @MJannery | @CloudExpo [#Cloud]
It’s always been my philosophy that we should make it as easy as possible for our prospects to write us checks. Sounds flippant, but I believe it is sound business philosophy.
It’s been interesting to me to see how customers have wanted to buy infrastructure software, and how it has changed over the years.
Back in the ’90s and early 2000s, most network management vendors charged for their software per interface or per port, and we did as well. Around 2004 and 2005, we started receiving requests for device-based pricing, since it was difficult for end users to know just how many interfaces they had (but they knew how many devices were in inventory). So we acquiesced and converted to device-based pricing. To do this, you need to make assumptions about the average number of ports per networking device (or interfaces per blade server, etc.).
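As a quick, purely hypothetical illustration of that conversion (the prices and port counts below are invented, not any vendor's actual figures), the per-device price is simply the old per-port price multiplied by the assumed average port count per device:

price_per_port = 50.0            # hypothetical legacy list price per interface
avg_ports_per_device = 24        # assumed average port count across the installed base

# Device-based price that stays roughly revenue-neutral under that assumption.
price_per_device = price_per_port * avg_ports_per_device

inventory = {"core switches": 4, "access switches": 30, "routers": 6}
device_count = sum(inventory.values())

print(f"Per-device price: ${price_per_device:,.2f}")
print(f"Quote for {device_count} devices: ${device_count * price_per_device:,.2f}")

The customer only has to count devices in inventory, and the vendor absorbs the risk that the real average port density differs from the assumption.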