Cloud: Mobility Driving Asian Startups

There are more mobile phones in the Philippines than people. And there are a lot of people.

This is one of the amazing statistics of our current era, in which the compulsion for humans to communicate is leading us into the realm of massive data flows in an increasingly interconnected world.

The Philippine phenomenon is due in part to the presence of two dominant mobile carriers – and a third that nips strongly at their heels – which charge extra fees for texts and calls outside their networks. About 96% of this traffic is prepaid: users “load” up their phones at ubiquitous small stores in increments of less than one US dollar.

Similar noteworthy statistics are found elsewhere in Southeast Asia.

Indonesia, for example, was sending out 385 tweets per second in 2013, grabbing 7.5% of global Twitter traffic. Other strong social-media numbers have led many there to refer to their country as “the social media capital of the world.” With overall wired Internet access still lacking, more than 60% of Indonesia’s social traffic is mobile.

Thailand claims 97% of its population on social media, with a prepaid SIM card system similar to the Philippines’ and easy roaming throughout neighboring countries.

Malaysia has a higher average income than most of its neighbors, and Vietnam a lower one, but both also contribute to an Asian average of more than 360MB of data use per month on mobile devices.

Singapore is of course the great economic power of the region, with a per-person income level that now surpasses that of the United States. Singapore has among the fastest Internet access in the world and is moving toward being a Smart City through use of IoT technology.

The average mobile data use in Asia is more than three times that of North America, almost 20 times that of Europe, and 200 times that of Africa. As I noted above, amazing.

Heat, Noise, and Startups
The hyperkinetic nature of Southeast Asian nations—the traffic, noise, masses of people, and heat can easily overwhelm on a short-term basis and grind one down over the long term—is reflected in recent economic growth through the region. Our research at the Tau Institute shows the region to be the most dynamic in the world, even as clear infrastructure problems are apparent everywhere outside of Singapore.

This energy is also reflected in a growing culture of startups and innovation. I recently attended a startup competition in Manila, which is part of a larger event called the Top 100 program.

The Top 100 program culminates in Singapore June 23-24, where 100 companies from a field of 300 across 14 Asian nations will compete for attention from investors. The competition goes beyond Southeast Asia, with teams from India, Bangladesh, Kazakhstan, Taiwan, Japan, and South Korea joining in the fun.

Much of the fun will be focused on mobile apps, because mobility is huge and apps are cool. Apps also seem to lack the barriers to entry of creating the next great piece of enterprise software. I would like to see more of an emphasis on frameworks and platforms, and a greater presence of all the open-source companies we see in the US.

I am also encouraging people to pursue innovation within their organizations. Even as I marvel at the energy and enthusiasm of the startup communities in Manila and elsewhere, the reality is that large governmental organizations and big companies are the primary employers throughout this massive region. Innovation need not be a stranger to them.

read more

New Updates For HP’s Big Data Platform Haven

HP has updated its big data platform Haven to include new analytics and predictive capabilities. The platform is geared toward enterprises with large volumes of data of various types, and the new update expands the types of data that can be analyzed through a new connector framework. The update also adds a new Knowledge Graphing feature along with improved speech recognition and language identification.

 

The Haven big data platform is made up of analytics, hardware, and services, some of which are available on demand. HP launched its big data platform in 2013, with Haven as the umbrella for various technologies. The update brings together analytics for structured and unstructured data by combining the context-aware unstructured-data analytics of HP IDOL with the SQL-based capabilities of HP Vertica.

 


 

Data sources supported through the new connector framework include Microsoft Exchange, SharePoint, Oracle and SAP enterprise applications, and cloud services such as Box, Salesforce, and Google Drive.

 

The knowledge-graphing feature mentioned above can analyze connections in data, enabling advanced, contextually aware research across assorted data sources. The enhanced speech and language capabilities of the update work with 20 languages. This part of Haven is powered by deep neural network technology trained on thousands of hours of audio samples.

 

Other enhancements include targeted query response and IDOL search optimizer. The targeted query response helps customize and improve search results based on specific criteria. The IDOL search optimizer is used for understanding the types of searches being done by users and then gauging the quality of results.

 

The goal of HP’s Haven platform is to let big companies benefit from big data computing across almost any data type without relying on specialized data scientists or costly, complex integration projects.

The post New Updates For HP’s Big Data Platform Haven appeared first on Cloud News Daily.

A four step guide to triple your savings on AWS deployments

(c)iStock.com/zakokor

The playing field for cloud infrastructure providers is busy, as the most recent Gartner report shows, and Amazon Web Services (AWS), though being challenged by Microsoft Azure, still looks like the unattainable leader. What makes Amazon appear to be a cure-all solution, and how many of its services actually bring tangible benefits to cloud neophytes?

AWS comes with a stack of tempting benefits such as elastic capacity, scalability, speedy and agile delivery, focus on business and global reach. However, the CXO-level decision-makers will often be cautious, and with good reason, when it comes to migrating their apps to the cloud. Besides security concerns and prejudices surrounding the cloud infrastructure, there is another underlying reason: who wants to undergo the trouble of revamping their architecture with outcomes that are hard to calculate?

Amazon’s cloud has over 15 services, but you are not likely to need all of them. We have distilled the experience of one of our clients, who cut their monthly cloud expenses threefold, into this guide.

Painless migration in 1-2-3

Before plunging head first into migration, you need to put the existing infrastructure and processes under a microscope to find out what you are actually going to migrate. This would help a lot with the financial assessment to compare the bottom line costs of running an in-house data centre or co-located facilities with the expenses on a cloud-based architecture. Preceded by a security, technical and functional assessment to decide which parts of infrastructure are more suited to the cloud and can be moved, you can go on to develop a detailed roadmap.

Surprisingly, most parts of the existing architecture can be re-implemented in the cloud as they are and only modified later, so do not view the legacy architecture as a roadblock but rather as a springboard on the way to the cloud. NB: if you use third-party licensed products, do check their terms and conditions and how AWS policies work with them.

Once we started exploring the topic of Amazon’s cloud we expected some problems with this stage, but surprisingly it is possible to move live services without harmful downtime. The key to success is to act step by step.

Prepare

At the initial stage the task is to get familiar with AWS capabilities and to recreate in the cloud the same services as on the dedicated servers. A tool like Amazon EC2 makes it quite easy to add a new server, set up security, and calculate the costs. Already at this stage you will have a pretty accurate estimate of the cloud capacity and associated costs necessary to support your existing architecture. But this is just the first step – the next two are much riskier.
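As a rough sketch of the cost estimate this stage produces, the arithmetic can be as simple as multiplying server counts by hourly rates and hours per month. The roles, rates, and figures below are illustrative placeholders, not actual AWS prices; check the AWS pricing pages for real numbers.

```python
# Rough monthly cost estimate for mapping existing servers onto EC2.
# The hourly rates below are made-up placeholders, not AWS pricing.

HOURLY_RATE_USD = {
    "app-server": 0.10,  # e.g. a general-purpose instance type
    "db-server":  0.25,  # e.g. a memory-optimised instance type
}

HOURS_PER_MONTH = 730  # average hours in a month

def estimate_monthly_cost(inventory):
    """inventory: dict mapping a server role to the number of servers needed."""
    return sum(
        count * HOURLY_RATE_USD[role] * HOURS_PER_MONTH
        for role, count in inventory.items()
    )

# A hypothetical inventory drawn up during the assessment phase:
cost = estimate_monthly_cost({"app-server": 20, "db-server": 17})
print(f"Estimated on-demand cost: ${cost:,.2f}/month")
```

A spreadsheet does the same job, but keeping the model in code makes it easy to re-run as the inventory changes during the assessment.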

Clone

The next big thing is to move the data from the old server to the cloud. You can start by creating a relational database in the cloud with Amazon RDS, a service that supports replication from external sources. The copy of your data will then be hosted in the cloud while end-users continue to use the old system. Keep the two systems synchronised until the cloud infrastructure is fully tested and you are ready to finalise the migration.
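For illustration, provisioning the RDS copy might look like the sketch below using boto3, Amazon's Python SDK. The identifier, engine, and instance class are hypothetical choices, and replication from the old server is configured separately at the database-engine level (e.g. binlog replication for MySQL).

```python
# Sketch of provisioning the cloud copy of the database with Amazon RDS.
# All names and sizes here are hypothetical placeholders; with credentials
# configured, the kwargs would be passed to boto3's rds.create_db_instance().

def rds_clone_params(name, storage_gb=100):
    """Build the parameters for the RDS instance that will hold the clone."""
    return {
        "DBInstanceIdentifier": name,
        "Engine": "mysql",                 # match the engine of the old server
        "DBInstanceClass": "db.m3.large",  # illustrative instance class
        "AllocatedStorage": storage_gb,    # in GB
        "MultiAZ": True,                   # standby replica in a second AZ
    }

params = rds_clone_params("legacy-db-clone")
# boto3.client("rds").create_db_instance(
#     MasterUsername=..., MasterUserPassword=..., **params)
```

Separating the parameter-building from the API call keeps the sketch testable and makes the configuration easy to review before anything is actually provisioned.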

Switch

At the final stage you want to re-route your clients to the cloud-based system. This is a critical task, and it can be handled with the help of Amazon Route 53, a highly available and scalable cloud DNS service. Using the domain name system allows you to create a transitional workflow in which, for some period, users access either the old or the new system while data continues to be copied from the dedicated server. The switch itself starts with setting a new IP for the domain name and a minimal time to live (TTL) of 60 seconds for your DNS records at this stage. Once the new system is secure and stable, the DNS switch will go almost unnoticed by end-users.
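As a sketch, the record change described above could be built as follows with boto3, Amazon's Python SDK. The domain and IP address are hypothetical placeholders.

```python
# Sketch of the Route 53 record change used for the switch-over: point the
# domain at the new cloud IP with a 60-second TTL so that a rollback, if
# needed, propagates quickly.

def build_switch_change(domain, new_ip, ttl=60):
    """Build the ChangeBatch that route53.change_resource_record_sets expects."""
    return {
        "Comment": "Re-route traffic to the cloud-based system",
        "Changes": [{
            "Action": "UPSERT",  # create the A record, or overwrite it
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "TTL": ttl,  # keep low until the new system is proven stable
                "ResourceRecords": [{"Value": new_ip}],
            },
        }],
    }

change_batch = build_switch_change("www.example.com.", "203.0.113.10")
# With credentials configured, this would be submitted as:
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="YOUR_ZONE_ID", ChangeBatch=change_batch)
```

Once the new system has run stably for a while, the TTL can be raised again with the same call to reduce DNS query load.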

Why is it all about scalability?

When speaking about the cloud, one benefit that is often hyped is the flexibility that comes from scaling the capacity of your servers up and down depending on your current needs. What practical issue does it address?

To visualise the problem caused by system overload, imagine a busy petrol station with only two pumps available to serve the clients. Only two cars can be refuelling at any given moment, while the others queue. The line will keep growing, and once the petrol station’s physical space is filled, new cars will not even be able to enter. To ease the situation and dissolve the queue, it would only be logical to turn on more pumps. The same happens with a server, which can process only a limited number of requests per minute: if the traffic to your site increases, it needs extra capacity to handle it.
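The petrol-station analogy can be sketched as a one-minute simulation, with pumps standing in for server worker slots and the station forecourt for the request queue. All numbers are illustrative.

```python
# Toy model of the analogy above: each "pump" serves one car per minute,
# the forecourt holds a limited queue, and overflow is turned away
# (the analogue of dropped requests on an overloaded server).

def simulate_minute(pumps, queue_capacity, arrivals, waiting=0):
    """Return (served, still_waiting, turned_away) after one minute."""
    total = waiting + arrivals
    served = min(pumps, total)
    still_waiting = min(total - served, queue_capacity)
    turned_away = total - served - still_waiting
    return served, still_waiting, turned_away

# Two pumps, room for four queued cars, ten cars arrive in one minute:
print(simulate_minute(pumps=2, queue_capacity=4, arrivals=10))
# Turning on more pumps dissolves the queue:
print(simulate_minute(pumps=8, queue_capacity=4, arrivals=10))
```

Running the two cases shows that with two pumps most arrivals queue or are turned away, while with eight pumps nothing is lost, which is exactly the effect scaling out is meant to achieve.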

For example, before migrating to AWS one of our customers operated 17 dedicated database servers and 20 application servers, and they had to maintain all of them at all times even when the load was lower than their capacity. On the other hand, at peak times it would take them days if not weeks to add extra capacity to meet growing demand. That created a “lose-lose” situation: they had to either pay for idle time, or lose business through disruptions in service availability when the system was overloaded.

Does Amazon Web Services help to solve this problem? Yes, thanks to traffic balancing and horizontal scaling, combined with software optimisation and on-demand availability of server memory. There is a trade-off here – to enjoy the advantages you have to invest some effort first. You will have to implement a shared-nothing architecture, which means splitting the app into several parts, each independent and self-sufficient, and then scaling separate parts of it, for example, to serve more clients.

Scalability is further ensured with the help of Elastic Load Balancer. This service automatically routes incoming application traffic across multiple instances and even across multiple physical Availability Zones in the cloud.

Manual scaling vs. auto-scaling

Maximising the use of cloud resources through scaling might be a way to partially justify the migration effort and the rather high costs of extra services. Amazon attracts clients with the promise of auto-scaling, which makes maintenance as easy as pie. Here is how it works: the Amazon CloudWatch web service makes your cloud system visible through and through, giving insight into how resources are utilised, how the application performs, and how healthy it is. The service collects and analyses information from log files and metrics, and based on this data it can predict resource needs, such as request spikes.

Based on the data gathered by this service you can go on to set up auto-scaling, which means that more server or database capacity will be activated when it is most needed. Depending on the type of your business, the spikes may vary by time of day, day of the week (e.g. weekend activity), or time of year (e.g. seasonal or holiday spikes), so scaling your capacity up just before a spike starts, and down after it ends, will help you serve more customers, save money by reducing server idle time, and ensure that the service is stable and available at all times.

Understanding the peak scenarios, you can set up rules for automatic scaling. Put simply, such a rule may sound like this: if traffic increases by 20%, a new server machine is added. For example, one of our clients normally receives 1,000 requests per minute, while during Black Friday this skyrocketed within a short time, hitting 4,000 requests per minute. Because more servers were activated, end-users did not suffer from slow service and the system operated normally.
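A minimal sketch of such a scaling rule, assuming a baseline of 1,000 requests per minute and one server per 1,000 requests. In production this logic lives in an Auto Scaling policy driven by CloudWatch metrics; the thresholds here are illustrative.

```python
# Sketch of the "traffic up 20% -> add capacity" rule described above.
# Assumed figures: 1,000 rpm baseline, one server handles ~1,000 rpm.

def servers_needed(current_rpm, rpm_per_server=1000,
                   baseline_rpm=1000, growth_threshold=1.2,
                   current_servers=1):
    """Decide how many servers the fleet should run for the observed load."""
    if current_rpm < baseline_rpm * growth_threshold:
        return current_servers  # within the normal range, no change
    # Scale out to cover the observed load, one server per 1,000 rpm:
    return -(-current_rpm // rpm_per_server)  # ceiling division

# Normal day vs. a Black Friday-style spike:
print(servers_needed(1000), servers_needed(4000))
```

With the assumed figures, a normal 1,000 rpm day stays on one server, while a spike to 4,000 rpm scales the fleet out to four.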

Squeezing more value from AWS

Once you have the scalability functionality of the cloud up and running, it is time to take a breath and look at additional features from the AWS stack that can enrich your application and trim costs for better efficiency. Our favourites are search, databases, and, to some extent, the content delivery network.

Why spend development effort on search?

The search functionality for applications and websites hosted in the cloud can be powered by the Amazon CloudSearch service. The core advantages of this service are elasticity, speed, and scaling, which means that your website will be able to handle even 400 million customers should they simultaneously decide to search for a specific item across your assets.

Additionally, setting up CloudSearch will facilitate indexing your files and configuring the search parameters to make search smart and convenient for your customers and users. For example, features such as free-text, Boolean, and faceted search and autocomplete suggestions are taken for granted by Internet users because of their availability in the major search engines, but these features require complex algorithms and may be costly to implement as standalone functionality.

However, if you make use of the already implemented AWS features, including support for 34 languages, your users will feel at home navigating your system. Another business driver might be customisable relevance ranking and query-time rank expressions, which help you better target your marketing efforts and show up in the right place at the right time with your offering.

Struggling with big data

As your business grows, so does the amount of data that it generates. This may include various types of operational metrics, logs, application status updates, access entries or configuration changes, geolocation information about objects or process status for activities in a workflow. So on the one hand, large amounts of such information should be stored to meet numerous legal and archival regulations, while on the other hand, business data per se is acquiring commercial value once you decide to leverage it in business analysis. For some businesses such as online gaming, it is also vital that requests to the database are processed quickly and the updates from multiple gamers are synchronised in real time.

Configuring and maintaining large databases is a mundane and resource-consuming endeavour, though you can get some relief with Amazon SimpleDB, which can help ease your administration costs a bit. SimpleDB is pretty much what it says it is: a flexible non-relational data store that is marketed as “highly available” and “easy to administer”. The small response times live up to that billing, but another advantage that has caught our eye is the geographic distribution of your data across multiple locations – pretty useful because even if one location fails, your service will remain uninterrupted.

Content delivery network

Another feature from Amazon is the content delivery network provided by the Amazon CloudFront service. It does not offer an upfront advantage and would not make much difference for you if you run a small local business. Nevertheless, it is worth considering in the long run, especially for companies with customers spread across different continents. By integrating with other Amazon Web Services products, it helps distribute content quickly and securely and reach end users with low latency. Because of its fault tolerance and backups in other locations, we would recommend this service to public limited companies whose investors demand quick disaster recovery.

The bottom line

The odds are that cloud technologies are here to stay, but like any major infrastructure change they require time and effort to implement, as well as an adaptation period. AWS does offer a bunch of sophisticated services, but it also costs a lot of money. If you have fewer than 10 servers, it is probably too early to migrate to AWS. But once you cross this threshold and feel that your business is being affected by traffic surges, migration may save you up to 60%.

The migration strategy described in this article is just one of many ways to do it, but based on our experience we aimed at slow immersion and step-by-step exploration of the cloud. The major hidden perk of un-grounding your business system is that future innovations and changes become easier to implement, and the price of an experiment, even a failed one, is considerably lower. And in today’s race for innovation, the game is worth the candle.

The Box advantage: Growth, predictability and competitiveness

(c)iStock.com/ngkaki

Although Box’s stock value has had its ups and downs since its IPO in January, going public sent an important signal to investors. Aaron Levie has demonstrated that a subscription-based service is now the only viable business model in the technology industry today.

When Salesforce debuted in 2004, there was very little market awareness of recurring revenue-based business models. They were mostly associated with print media and telecoms. Eight years later, Workday’s successful IPO convinced many sophisticated institutional firms of the viability of the subscription model, but Main Street investors largely remained on the sidelines.

But the recent liquidity events of HubSpot, Zendesk, New Relic, and Box have brought this model squarely into the mainstream. What has made the broader investor community finally warm to the idea of recurring revenue-based business models?

They now see that their subscription-based business models give SaaS companies advantages in growth, predictability and competitiveness. Let’s start with growth.

Growth

In a traditional business model, your revenue resets at the beginning of every financial quarter. You sell your product, you recognise the revenue, and ninety days later you start all over again at zero. 

But in a recurring revenue model, you begin every quarter with guaranteed revenue. It’s similar to the difference between salaries and royalties. You have to work for every penny of your salary, and it stops when you leave your job. But royalties accumulate over time as you produce more work, leading to a larger and larger income base.

As a result, the path to growth for subscription companies is much simpler. There is less effort in chasing after every dollar, and more focus on monetising ongoing relationships. As long as a subscription company has a positive recurring profit margin, it can choose to invest aggressively in growth while prudently managing expenses.

That’s why the median growth for SaaS subscription companies is over three times that of traditional enterprise software companies. Box’s revenue last quarter was $62.6 million (up 61% year-over-year), but its bookings were $82 million (an increase of 33%). The bookings figure is your real growth metric – it’s a forward-looking number that allows Box to manage its costs effectively.

Predictability

Next fiscal year, Box expects their revenue to be in the range of $281 to $285 million. That’s an exceptionally tight range! How can they make such a narrow estimate? Because they can precisely calibrate losses against guaranteed future revenue.

In fact, that’s why subscription businesses are more comfortable running at temporary losses. Those losses don’t stem from the vagaries of the market; they are completely premeditated. Box knows that next year it can bring expenses down as needed. As long as that money is spent on growth and market share, it’s a good investment.

What’s more, it’s one thing to acknowledge that customer success is important, but it’s another to orient and define your company around it. This isn’t about surveys and thank you calls – smart recurring revenue-based businesses like Box build their entire corporate DNA around ongoing customer outreach and a laser-like attention to customer usage patterns.

In today’s subscription economy, widget-based metrics like units, margins, and inventory have been replaced by relationship metrics like renewals, upsells, and churn. And investors recognise that with greater customer visibility comes greater fiscal predictability.

Competitiveness

When you market a stand-alone product – a mobile phone, an automobile, or an electronic device – you’re also selling it to your competitors. They are happy to wait in line outside your store, purchase your new product right off the shelf, then hand it over to their R&D people and reverse-engineer it for their customer base. 

Unfortunately, not all your customers are as enthusiastic as Apple fanboys. As a result, they’re often sitting on older models of your product, and are vulnerable to switching over to your competitor. If I’m going to spend the money on an upgrade, why not try something new?

But with a subscription service, as long as you keep investing in innovation and fine-tuning your service, your customer gets used to things getting better and better on their own. There’s no reason to switch – in fact there’s every reason to stay.

Not only that, subscription services have invaluable insight into their customers through their usage data: what features you use, when and how often you use them, and what new services make sense. That’s information your competitors simply do not have, and cannot replicate.

Box’s paying customers include over 50% of the Fortune 500 and over 22% of the Global 2000. Their clients keep renewing, year after year; Box is clearly a mission-critical service. As a result they have “negative churn,” meaning the upsells from their existing customer base more than make up for the revenue lost from defecting clients.
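Negative churn reduces to simple arithmetic: recurring revenue grows as long as expansion revenue from existing customers exceeds the revenue lost to churn. The figures below are made up purely for illustration, not Box's actual numbers.

```python
# Illustrative net revenue retention calculation. A result above 1.0 (100%)
# means the base grows even with zero new customer sales: "negative churn".

def net_revenue_retention(starting_arr, churned_arr, upsell_arr):
    """Retained plus expanded revenue as a fraction of starting recurring revenue."""
    return (starting_arr - churned_arr + upsell_arr) / starting_arr

# Hypothetical: lose $5M of a $100M base to churn, expand existing accounts by $12M.
nrr = net_revenue_retention(100.0, 5.0, 12.0)
print(f"Net revenue retention: {nrr:.0%}")  # above 100% means negative churn
```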

Conclusion

I’ve discussed why subscription models are more competitive than widget-based businesses by an order of magnitude. But lastly, I think a big reason that mainstream investors are jumping into subscription models is because they see a marked difference in their own spending habits. From transportation to software to media, consumers and enterprises alike are shifting from static products to ongoing services. And like all astutely managed subscription companies, Box will continue to grow more valuable over time – to the markets and their customers.

While my last piece focused on Box’s post-IPO product plans, in my next piece I’ll be tackling the big debate facing the SaaS industry today: profits versus growth.

IT Operations Modernization By @Dana_Gardner | @CloudExpo [#Cloud]

Exelon Corporation employs technology and process improvements to optimize their IT operations, manage a merger and acquisition transition, and to bring outsourced IT operations back in-house.
To learn more about how this leading energy provider in the US, with a family of companies having $23.5 billion in annual revenue, accomplishes these goals, we’re joined by Jason Thomas, Manager of Service, Asset and Release Management at Exelon. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.


File Governance Policies and Features By @JimLiddle | @CloudExpo [#Cloud]

With recent high-profile data breaches, companies should ensure they have the following five file governance policies in place to secure their file assets.
Ensure that an Identity Management policy is in-place, is clear, and if one exists that it is validated and checked regularly.
Check whether services and applications can take advantage of existing Identity Management to enable a Single-Sign-On (SSO) rather than promoting Identity Management Sprawl.


No Masters in Disasters Needed By @BDVandegrift | @CloudExpo [#Cloud]

The concept of a cloud facilitating applications is by no means new. Those of us who diagrammed network connectivity around 1993 will recall drawing a big puffy cloud symbol in between two local area networks. The cloud represented the mysterious Internet – that mash-up of routers and other items bouncing our packets back and forth through millions of ports, only to reassemble the bytes on the other end into – hopefully — the same item that was sent.
Today, we have dissipated that nebulous cloud symbol to accurately define its contents of firewalls, load-balancing devices, switches, routers and storage devices. As time passed, we even moved beyond the physical layer to embrace a virtual realm, as an obscure organization called VMware began to puncture its way out of EMC and take hold as a processing juggernaut, without the need for more heavy metal. But the cloud evolution was not completed at this time, by any means.


Painless Polyglot Persistence By @IBMCloud | @CloudExpo [#Cloud]

When it comes to building applications, one database definitely does not fit all. Traditional SQL databases are great for storing highly structured, normalized data and performing analytics and reporting. NoSQL has attracted developers with its awesome flexibility, and JSON-centric document stores like Cloudant make web developers incredibly productive by offering a JavaScript environment from end-to-end.
Recent Big Data challenges have driven the need for a distributed approach to analytics employing MapReduce techniques embodied in software like Hadoop and Spark. So it’s natural that a well-integrated hybrid environment comprised of multiple types of databases and information access paradigms is critical to meeting the business challenges of tomorrow.
In his session at 16th Cloud Expo, Raj Singh, Developer Advocate at IBM Cloud Data Services, will walk through a mobile app powered by a Cloudant NoSQL database that relies on dashDB for analytics and reporting of Twitter data from IBM’s Social Analytics service.


SafeLogic “Sponsor” of @CloudExpo New York | @SafeLogic [#Cloud #IoT]

SYS-CON Events announced today that SafeLogic has been named “Bag Sponsor” of SYS-CON’s 16th International Cloud Expo® New York, which will take place June 9-11, 2015, at the Javits Center in New York City, NY.
SafeLogic provides security products for applications in mobile and server/appliance environments. SafeLogic’s flagship product CryptoComply is a FIPS 140-2 validated cryptographic engine designed to secure data on servers, workstations, appliances, mobile devices, and in the Cloud.


10 Resources Every Remote Worker Should Bookmark

It’s undeniable: working remotely is the way of the future. More and more companies are investing in employees who don’t work the typical 9 to 5, because it actually raises productivity while providing flexibility to employees. On top of that, increasingly sophisticated tools are emerging for the remote worker – including Parallels Access, of course! Despite the […]

The post 10 Resources Every Remote Worker Should Bookmark appeared first on Parallels Blog.