SYS-CON Events announced today that MobiDev, a client-oriented software development company, will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place October 31 – November 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. MobiDev is a software company that develops and delivers turn-key mobile apps, websites, web services, and complex software systems for startups and enterprises. Since 2009 it has grown from a small group of passionate engineers and business managers to a full-scale mobile software company with over 200 in-house developers, designers, quality assurance engineers, and project managers, specializing in world-class mobile and web development.
Announcing @DasherTech to Exhibit at @CloudExpo | #AI #DX #Serverless #DataCenter
SYS-CON Events announced today that Dasher Technologies will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
Dasher Technologies, Inc.® is a premier IT solution provider that delivers expert technical resources along with trusted account executives to architect and deliver complete IT solutions and services to help our clients execute their goals, plans and objectives.
Since 1999, we’ve helped public, private and nonprofit organizations implement technology solutions that speed and simplify their operations. As one of the fastest growing IT solution providers in the country, we have gained a reputation for effortless implementations with relentless follow-through and enduring support.
Announcing @Ayehu_eyeshare to Exhibit at @CloudExpo Silicon Valley | #API #Cloud #Automation
SYS-CON Events announced today that Ayehu will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct. 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
Ayehu provides IT Process Automation & Orchestration solutions that enable IT and security professionals to identify and resolve critical incidents and achieve rapid containment, eradication, and recovery from cyber security breaches. Ayehu gives customers greater control over IT infrastructure through automation. Ayehu solutions have been deployed by major enterprises worldwide and currently support thousands of IT processes across the globe. The company has offices in New York, California, and Israel.
Grape Up to Exhibit at @CloudExpo | #CloudNative #DevOps #PaaS #CloudFoundry #GrapeUp
SYS-CON Events announced today that Grape Up will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct. 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA. Grape Up is a software company specializing in cloud native application development and professional services related to Cloud Foundry PaaS. With five expert teams that operate in various sectors of the market across the U.S. and Europe, Grape Up works with a variety of customers from emerging startups to Fortune 1000 companies.
Five steps to gain data centre power chain management and risk assessment

It is a simple fact that data centre power outages cause major business disruption in our connected, web-delivered world. A severe data centre power outage can easily cause a major loss of customers and damage the brand – as well as the stock price.
Everyone must face these facts, because there is no silver lining to a power loss incident. The ramifications can be irreversible and include losses of trust, market share and reputation that are hard to quantify but definitely have a major, and lasting, impact.
It therefore follows that it makes smart business sense to take a proactive approach to preventing power failure in the data centre. It even behoves business heads to take an interest in the foundational technologies that underpin the success of the whole business. I’d encourage anyone in charge of data centre facilities or management who is confident in their facilities and the services provided to tell the business how the data centre is secured, compliant and risk-proofed. It’s a great way of demonstrating that you’re on top of the job and polishing the personal halo. If no one knows what the data centre does and how it’s run, then they won’t appreciate the hard-working team that keeps the ‘lights on’ and the whole business running.
So if you’re not proud of your power chain management at the moment, here are five steps you can take to gain better control of the situation and manage the risk currently facing your facilities.
Make sure that your physical IT infrastructure is mapped to your power chain – understand the flow
The first step is to discover which devices actually make up your power chain, along with their locations, respective dependencies and lifecycle status. It is important to know, at all times, when each asset was last serviced and by whom.
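As a rough illustration of what such a mapping can look like, the sketch below models power chain assets as a tiny dependency graph. This is a minimal sketch only; the class, its fields and the device names are hypothetical and not tied to any particular DCIM product.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PowerAsset:
    """One device in the power chain (hypothetical model)."""
    name: str
    location: str
    last_serviced: date
    serviced_by: str
    feeds: list = field(default_factory=list)  # downstream assets this device powers

# A tiny example chain: utility feed -> UPS -> PDU -> rack
rack_a1 = PowerAsset("Rack A1", "Data hall 1", date(2017, 5, 2), "Vendor X")
pdu_1 = PowerAsset("PDU 1", "Data hall 1", date(2017, 3, 14), "Vendor X", feeds=[rack_a1])
ups_1 = PowerAsset("UPS 1", "Plant room", date(2017, 1, 20), "Vendor Y", feeds=[pdu_1])
utility = PowerAsset("Utility feed A", "Substation", date(2016, 11, 8), "Utility Co", feeds=[ups_1])
```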
Have a single pane-of-glass view across all data centres/data rooms – save time and effort on management
Most data centre managers have some sort of monitoring system with a view into the building management system (BMS) and facility operations such as heating, ventilation, and air conditioning (HVAC). It is worth noting, though, that these data centre monitoring systems are often siloed in nature and keep all their data locked within their respective databases.
To do a better, all-encompassing and timely job, managers should get access to all data in real time via a consolidated portal that automatically gathers information from the following sources (a rough sketch follows the list):
- All data centres
- Multiple BMS systems
- Mixed vendor/hardware (the IT)
- Facilities equipment
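A minimal sketch of such a consolidated view, assuming each source (a BMS, facilities equipment, mixed IT hardware) can be wrapped in a callable that returns its latest readings; the feed names and readings below are made up for illustration.

```python
def consolidated_view(sources):
    """Pull the latest readings from every registered source into one structure.

    `sources` maps a source name to a zero-argument callable that returns
    that source's current readings as a dict.
    """
    view = {}
    for name, fetch in sources.items():
        try:
            view[name] = fetch()
        except Exception as exc:           # one failing feed should not hide the rest
            view[name] = {"error": str(exc)}
    return view

# Hypothetical feeds standing in for real BMS / facilities / hardware connectors
site_view = consolidated_view({
    "dc1-bms": lambda: {"hvac_supply_temp_c": 21.5, "ups_load_pct": 62.0},
    "dc2-bms": lambda: {"hvac_supply_temp_c": 22.1, "ups_load_pct": 48.0},
    "facilities": lambda: {"generator_fuel_pct": 93.0},
})
```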
Gain the ability to run power failure simulations – test and understand your disaster processes
Power failure simulations are a great way to test the resilience of a power chain while identifying the impact on all downstream devices affected by power loss. It’s the only way to demonstrate to your users and the business at large that the data centre management team is on top of its game, when incidents like British Airways’ outage show that not everyone else is.
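Building on the hypothetical dependency graph sketched in step one, a basic simulation can simply walk the `feeds` links downstream from the failed device and list everything that would lose power.

```python
def simulate_power_loss(failed_asset):
    """Return every asset that would lose power if `failed_asset` went down,
    by walking the downstream `feeds` links of the hypothetical model above."""
    affected, seen, queue = [], set(), [failed_asset]
    while queue:
        asset = queue.pop()
        for downstream in asset.feeds:
            if id(downstream) not in seen:
                seen.add(id(downstream))
                affected.append(downstream)
                queue.append(downstream)
    return affected

# e.g. simulate_power_loss(ups_1) would report PDU 1 and Rack A1 as affected
```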
Ensure that all incidents are captured within the ITSM service desk – then use trend analysis to identify further potential risks of failure before they happen
It’s important to keep track of the small and large issues that impact operations so you can identify problematic patterns and avoid future disruptions. This takes full integration between data centre operations, the IT service management (ITSM) service desk and facilities information to document problems and make impactful changes.
Trend analysis is then about looking back in order to look forward. By monitoring and documenting how data centre capacity is used, you can detect trends and patterns that will help with future capacity planning needs.
That will help managers make the case before issues become critical.
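As a rough sketch of what that trend analysis can look like, a simple linear fit over historical peak load can project how much headroom remains on a feed before it reaches its rated capacity. The figures below are purely illustrative.

```python
# Fit a straight line to monthly peak load (kW) and project the remaining headroom.
monthly_peak_kw = [310, 318, 325, 334, 341, 350]   # last six months (illustrative)
rated_capacity_kw = 400

n = len(monthly_peak_kw)
xs = list(range(n))
mean_x = sum(xs) / n
mean_y = sum(monthly_peak_kw) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_peak_kw)) \
        / sum((x - mean_x) ** 2 for x in xs)

if slope > 0:
    months_left = (rated_capacity_kw - monthly_peak_kw[-1]) / slope
    print(f"Load is growing ~{slope:.1f} kW/month; roughly {months_left:.0f} months of headroom left.")
```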
Ensure that the power chain is secure
Ask a key question: does your IT security encompass network vulnerabilities and your power chain’s devices? Know whether the power chain is part of your IT security protocols, and therefore who has access to your control points. These are critical questions to address; when the answers are known and entry/access is controlled, they can insulate the entire operation from possible breaches.
To work out the best way to assess the probability of power loss and mitigate the associated risks, ask yourself the following questions:
- Do I have full transparency into all interconnected devices and systems?
- Am I monitoring my operations in real time?
- Have I documented the data centre’s resiliency?
- Am I capable of running a stress test to determine the various risk levels associated with power loss?
- Can I identify the changing trends in my power system and respond accordingly?
- What is the overall vulnerability of my power chain?
If you do not have answers to all of these questions, and finding them seems daunting, consider implementing a data centre infrastructure management (DCIM) solution. DCIM solutions are a proven means of addressing these concerns while enabling both facilities and IT personnel to improve overall operations and lower capital expenses.
Of course, real life is real life. There is no panacea for ensuring 100% uptime and efficiency. However, there are methods to identify areas of improvement and prepare for service disruption. You owe it to your company and customers to be aware of the data centre management tools that help preserve services. And data centre managers, the unsung heroes of company success, owe it to themselves to have their value understood and respected appropriately.
Alibaba Keeps Growing
It looks like Alibaba’s growth is unstoppable, as is evident from last quarter’s results.
During the quarter that ended on June 30, Alibaba’s cloud business crossed the one million customer mark, and revenue for the period rose to 2.43 billion yuan, which roughly equals US$359 million.
According to the reports released by the company, it added 137,000 paying customers during the quarter. A good chunk of its revenue came from value-added services, which drove up the average revenue per user (ARPU) metric.
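As a back-of-the-envelope illustration of the ARPU arithmetic, the reported quarterly cloud revenue divided by the roughly one million paying customers gives a rough per-customer figure. This is a simplification (it uses the end-of-quarter customer count rather than an average over the period), and the conversion rate is derived from the article’s own figures.

```python
# Rough quarterly cloud ARPU from the reported figures (illustrative simplification).
quarterly_cloud_revenue_cny = 2.43e9     # ~US$359 million
paying_customers = 1_000_000             # "crossed the one million customer mark"

arpu_cny = quarterly_cloud_revenue_cny / paying_customers
arpu_usd = arpu_cny * (359e6 / 2.43e9)   # convert using the article's own numbers
print(f"Approximate quarterly cloud ARPU: {arpu_cny:.0f} yuan (~US${arpu_usd:.0f})")
```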
Among the high-paying clients, some of the notable ones are China’s CITIC Group, Huaneng Group and PICC Finance.
This good news is sure to motivate the company to take many more steps to widen its worldwide customer base. It has already been opening centers in different parts of the world, from Sydney in Australia to Dubai and even London, to provide cloud services to a global clientele.
Besides cloud, many of the core business segments also saw robust growth during the second quarter. In this period, Alibaba’s retail e-commerce business grew by a whopping 57 percent as it touched $5.4 billion in sales. Notably, both revenue per buyer and the customer base have increased, setting the stage for further growth over the next few quarters too.
Overall, Alibaba’s revenue increased a massive 56 percent year-on-year to around $7.4 billion. These numbers make Alibaba one of the largest companies in the world, joining the ranks of Microsoft, Google and Amazon.
Much of this success can be attributed to the fact that Alibaba and its subsidiaries dominate the Chinese digital market. So how can dominance in one country put it into the elite $400-billion-and-up club?
Simply because China is the world’s single largest Internet market, with more than 700 million users. To give you some perspective, that’s roughly twice the entire population of the United States. On top of that, the average Chinese consumer spends more money online than the average American.
This surge in Internet users has happened in a controlled space to which many American companies don’t have access, and this is probably why Alibaba was able to make such rapid strides within a short period of time.
Though we can continue to debate whether this protectionism is right or wrong, one thing that’s certain is Alibaba’s astounding growth as it marches on to capture the world market.
AT&T, GE and Oracle offer juiciest cloud salaries, new data reveals

Cloud computing skills continue to be in high demand – and new figures from PayScale reveal that AT&T, General Electric and Oracle provide the best remuneration for top performers.
The figures, first reported by Forbes, cover a variety of metrics, from employers to different roles, company size and years of experience, with the data coming from more than 1,000 US-based respondents in each case.
If you want to make the most money from your cloudy career, then enterprise IT architect, with a median salary of $138,051, just pips senior solutions architect, with $132,092, as the role with the best remuneration. Solutions architect ($122,593), IT architect ($120,811) and senior systems engineer ($106,170) also broke the six-figure barrier, compared with DevOps engineer ($97,135) and software engineer ($95,962).
When it comes to specific companies, AT&T offers almost a quarter of a million dollars ($248,323) for its most experienced roles, with GE and Oracle the only others to offer more than $200k. Comparing against the salary data for the four hyperscale cloud infrastructure vendors, IBM came out with a top salary of almost $175k, with Microsoft ($166k) and Amazon ($164k) close behind and Google – albeit with less data to work from – at $115k.
Perhaps not surprisingly, larger organisations pay more, although the increase is not strictly linear. Organisations with fewer than 600 employees pay below $116k on average; however, the salaries – based on more than 100 data points – show no noticeable pattern (2,000-4,999 employees, $124,059; 5,000-19,999, $123,569) until the largest category, enterprises with more than 50,000 employees, whose average salary is $129,291.
These figures may add colour to a UK study released earlier this month by IT resourcing provider Experis, which warned that while the number of cloud vacancies almost doubled – a 97.73% increase – year on year, salaries for permanent roles only went up 2.7% on average.
The reasoning, Experis argued, was that as roles maintaining, optimising and enhancing companies’ existing cloud platforms proliferated, the skills needed for them became less specialised, making the roles easier to fill and causing pay growth to stall accordingly.
As a result, getting the best certifications is vital to forging a successful cloud career. Writing for this publication earlier this year, Alex Bennett, of IT training provider Firebrand Training, set out six of the most sought-after certifications in the industry, from AWS to Microsoft, as well as the (ISC)2 Certified Cloud Security Professional (CCSP) certification.
You can take a look at the full data here.
Effective Monitoring | @DevOpsSummit @Catchpoint #DevOps #APM #Monitoring
This is the second blog in a series of three in which I expand on some of the points raised in O’Reilly Media’s DevOps for Media & Entertainment report. The first post covered the two essential aspects of DevOps that are often overlooked: communication and empathy. Today, we dive into a more technical topic – monitoring. Monitoring is essential. It tells you if your service is up, down, fast, slow, and functioning as designed. When something inevitably breaks, a monitoring tool can notify you via alerts and help diagnose the problem.
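As a minimal illustration of that idea, assuming nothing beyond the Python standard library, a probe can record whether an endpoint is up, how fast it responded, and produce an alert message when either check fails; the URL and thresholds below are placeholders.

```python
import time
import urllib.request

def check_endpoint(url, timeout=5, slow_threshold=2.0):
    """Probe `url`; return (is_up, latency_seconds, alert_message_or_None)."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout).close()
        latency = time.monotonic() - start
        if latency > slow_threshold:
            return True, latency, f"{url} is up but slow ({latency:.2f}s)"
        return True, latency, None
    except Exception as exc:              # HTTP errors, timeouts, DNS failures
        return False, time.monotonic() - start, f"{url} is down: {exc}"

# e.g. up, latency, alert = check_endpoint("https://example.com/health")
```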
[slides] Continuous Deployment for #Docker | @DevOpsSummit #AI #DevOps
Most companies are adopting or evaluating container technology – Docker in particular – to speed up application deployment, drive down cost, ease management and make application delivery more flexible overall. As with most new architectures, this dream takes a lot of work to become a reality. Even when you do get your application componentized enough and packaged properly, there are still challenges for DevOps teams in making the shift to continuous delivery and achieving that reduction in cost and increase in speed.
[slides] #WebRTC for #IoT Edge Computing | @ThingsExpo #DX #M2M #RTC
Since 2013, NTT Communications has been providing its CPaaS service, SkyWay. Its customers’ expectations for leveraging WebRTC technology cover not only typical real-time communication use cases such as web conferencing and remote education, but also IoT use cases such as remote camera monitoring, smart glasses, and robotics. As a result, NTT Communications has numerous IoT business use cases that its customers are developing on top of the PaaS. WebRTC will lead IoT businesses to be more innovative and address current issues in edge computing scenarios.
In his session at @ThingsExpo, Kensaku Komatsu, a research and development engineer in the Department of Technology Development at NTT Communications, talked about the practical experience, activity and potential of WebRTC for IoT and edge computing scenarios.