In the spirit of leaving Las Vegas, I started thinking about the city's transformation from a desert oasis created by organized crime into a billion-dollar industry. How did big business and Wall Street push the mob out of Vegas? Movies and television shows such as Casino, Goodfellas, and The Sopranos illustrate a side of organized crime that few people ever witness. Las Vegas was built by organized crime to serve the syndicates with money laundering, prostitution, entertainment, and schmoozing. It worked splendidly, until it didn't. Something changed, and it set off a landslide that eventually saw Vegas' founders ousted and replaced by a larger, more powerful, and more adept landlord: the mega-corporations. How they accomplished this is a lesson IT should take note of.
Believe it or not, Governance, Risk and Compliance (GRC) was the tool of choice for ousting organized crime from Vegas. Big business and government made it an inhospitable environment for crime syndicates to operate in, at least with regard to gaming and hotels. The first step was to impose transparency on the casinos. Long suspected of rigging games, casinos faced local and federal government initiatives that pushed for gaming regulation and compliance. This reduced the ill-gotten gains of organized crime while lowering the risk for gamblers, since the odds were considerably more in their favor without the magnetic roulette ball or the aces tucked under the blackjack table.
Cloud Computing: Symform Announces Sponsorship of Cloud Expo New York
Symform, a revolutionary cloud storage and backup service, on Wednesday announced that it will be speaking, sponsoring and exhibiting at the upcoming cloud and high-tech industry event Cloud Expo New York.
On Monday, June 11 at 5:45 p.m. ET, Symform executives Praerit Garg and Margaret Dawson will speak on “The Distributed and Decentralized Cloud.” Dawson is also a featured speaker at the Cloud Boot Camp, where she will present “Extending Your Existing Infrastructure and Data to the Cloud” with Pavan Pant, Director of Product Management at CloudSwitch, Terremark, on Thursday, June 14 at 8:00 a.m. ET. Finally, Symform is sponsoring and presenting a lightning talk at the Cloud Camp unconference on Tuesday, June 12.
Data Center Fabric for Cloud Computing at Cloud Expo New York
Enterprise IT organizations want to deploy a virtualized data center fabric that will provide the foundation for agile private cloud computing. Getting there does not have to be difficult, but it does require a new approach to data center infrastructure design – an approach that is non-disruptive, vendor-agnostic, and very adaptable to changing business requirements.
In his session at the 10th International Cloud Expo, Bruce Fingles, Chief Information Officer and VP of Product Quality at Xsigo, will look at the limitations of traditional thinking, and show how extending the power of virtualization can help maximize performance, minimize complexity, and empower IT organizations with an agile data center fabric.
GoGrid’s Private Cloud Computing Solution Powers Business Services
GoGrid on Wednesday announced its Private Cloud infrastructure solution is powering a new network management application for Orange Business Services, the France Telecom-Orange branch dedicated specifically to business-to-business services.
“We’re excited to work with Orange Business Services as they cloud-enable this important infrastructure monitoring service for their customers,” said Jeffrey Samuels, CMO, GoGrid. “GoGrid’s Private Cloud was an ideal solution for their requirements because it provides true cloud-computing capabilities that can scale as needed and offer the requisite control and flexibility.”
There is Only One Cloud-Computing Myth
The only cloud-computing myth is that there are cloud-computing myths.
Instead, there are many articles about cloud myths: an endless parade of strawman arguments put out by writers, analysts, and marketers who lecture us on how stupid we are to believe the “myths” that cloud is inexpensive, that it is easy to deploy and maintain, that it automatically reduces costs, and so on.
Anyone who’s ever written a line of code or approved an enterprise IT contract knows there are no simple solutions and no universal solvents in their world. Never have been and never will be.
However, there are many powerful arguments in favor of enterprises migrating some of their apps and processes to the cloud, and there is a separate consumer-cloud industry that allowed me to listen to Igor Presnyakov rip through AC/DC’s “All Night Long” and Dire Straits’ “Sultans of Swing” on my Android phone last night.
I thank Google for the latter opportunity, even as the company remains as enigmatic as the Mona Lisa about what’s going on behind the scenes.
It’s too bad Google is not one of our great sharers, because the enterprise IT shops of the world could no doubt learn a lot more about cloud computing from watching Google at work than they can from using Google Apps.
But enough whining. Each organization needs to find its own cloud, and this should be a rigorous, perhaps time-consuming process. Discussion of particular cloud strategies and vendors should come at the end of this process. First, figure out what you want to do and why.
A nice cost analysis is helpful, of course, but my brain starts to seize up when the term “ROI” is put into play. At that point it becomes a contest to game the system and produce an ROI forecast that falsely advertises the technology’s direct impact on the company’s business. When used to justify technology, ROI and its sinister cousin, TCO, are the enemies of business success.
A nice thing about cloud is that the heated political and religious debates over Open Source have been (mostly) replaced by practical arguments over which specific product, framework, or architecture provides the best option for a particular initiative. If discussion of cloud should come at the end of the overall decision-making process, discussion of Open Source should come at the end of that discussion.
Don’t try to transform the organization overnight. This will happen on its own as more and more cloud floats into the enterprise. And don’t believe in the myth that there are cloud myths. There aren’t; only more wondrous technology that needs to be examined carefully as you continue the eternal quest to keep things as unscrewed up as possible in your organization.
enStratus Named “Coffee Break Sponsor” of Cloud Expo 2012 New York
SYS-CON Events announced today that enStratus Networks, provider of the enStratus cloud infrastructure management solution, has been named “Day 2 and Day 3 Morning Coffee Break Sponsor” of SYS-CON’s 10th International Cloud Expo, which will take place on June 11–14, 2012, at the Javits Center in New York City, New York.
enStratus is a cloud infrastructure management solution for deploying and managing enterprise-class applications in public, private and hybrid clouds. enStratus has a multi-cloud architecture that provides governance, automation and cloud independence.
Report Examines Leading Vendors of EMR/EHR Technology for Small Physician Practices
IDC Health Insights has released a new IDC MarketScape report designed to guide firms evaluating electronic medical record/electronic health record (EMR/EHR) vendors providing solutions to small physician practices. The new report, IDC MarketScape: U.S. Ambulatory EMR/EHR for Small Practices 2012 Vendor Assessment (Document #HI234732) provides an assessment of eleven EMR/EHR products from nine U.S.-based vendors that target small physician practices and qualify for American Recovery and Reinvestment Act of 2009 (ARRA) incentives. In the report, IDC Health Insights provides an opinion on which vendors are well-positioned today through current capabilities and which are best positioned to gain market share over the next one to four years. Vendors included in the report are: ADP AdvancedMD; Allscripts; athenahealth; eClinicalWorks; Greenway Medical Technologies, Inc.; LSS (MEDITECH); Lumeris; Optum (OptumInsight); and Practice Fusion.
IDC Health Insights expects the U.S. market to move from less than 25% adoption in 2009 to over 80% adoption by 2016. This anticipated growth is primarily influenced by regulatory stipulations and government incentives under the ARRA; additional trends include the quality of care improvements that result from using EMRs/EHRs in ambulatory practices, their growing capabilities and use of cloud computing, the use of mobile devices in ambulatory practices, and the consolidation of provider vendors as market saturation increases.
According to Judy Hanover, IDC Health Insights research director, “ARRA presents an unprecedented opportunity for providers in small practices to garner federal incentives for demonstrating meaningful use of clinical applications that will help to improve the quality of care, enhance patient safety and prepare their practices for the future. However, the EHR technology itself, the requirements and deadlines for achieving meaningful use and capturing incentives, and the need to change their business practices and integrate the new technology into practice patterns, present complex issues and challenges. If providers allow the constraints of meaningful use to dictate their technology choices and limit the goals for implementation, they may only see the short-term incentives and not the long-term strategic advantage that EHR can bring to their practices and may fail to compete under healthcare reform.”
With hundreds of small-practice EMR/EHR vendors participating in the market, the vendors included in this report were carefully selected to include the top five market leaders in the U.S. plus additional vendors that offer compelling technology, strategies, or services, such as advanced software-as-a-service (SaaS) offerings, innovative pricing or service options, or platform and architecture capabilities. This IDC MarketScape highlights the attributes and key capabilities that providers should look for when selecting an EMR/EHR, and offers a guide to best-practice-based approaches for leveraging an EMR/EHR to build competitive advantage in small practices.
Each product was evaluated against 25 criteria across two categories of measures for success: strategies and capabilities. Within each criterion, IDC Health Insights weighted specific features of the product, or of the product’s vendor, that are particularly significant for purchasers and users of the software. A significant and unique component of this evaluation is the inclusion of customer references for all of the products in the assessment.
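For readers who want a feel for how a weighted, two-category evaluation like this works mechanically, here is a minimal sketch in Python; the criteria names, scores, and weights below are invented for illustration and are not IDC's actual methodology or numbers.

    from typing import Dict

    # Invented example: each criterion gets a 1-5 score and a weight.
    # These names and numbers are illustrative, not IDC's real criteria.
    scores = {
        "strategies":   {"pricing_model": 4, "product_roadmap": 3},
        "capabilities": {"saas_delivery": 5, "meaningful_use_support": 4},
    }
    weights = {
        "pricing_model": 0.2, "product_roadmap": 0.3,
        "saas_delivery": 0.3, "meaningful_use_support": 0.2,
    }

    def category_score(criteria: Dict[str, int]) -> float:
        """Weighted average of the criteria inside one category."""
        total_weight = sum(weights[name] for name in criteria)
        return sum(score * weights[name] for name, score in criteria.items()) / total_weight

    for category, criteria in scores.items():
        print(f"{category}: {category_score(criteria):.2f}")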
Ms. Hanover will review the results of the IDC MarketScape in a one-hour, complimentary Web conference, EHR in the Small Ambulatory Practice: An IDC MarketScape Analysis, on Wednesday, June 6 at 12:00 p.m., U.S. Eastern time. She will also review best practices for implementing EMR/EHR in small ambulatory practices. Register here: http://bit.ly/JjEGj7.
i2c, KargoCard Partner for Prepaid, Mobile Payments, Loyalty Solutions in China
i2c, Inc., a payment processing technology company, and KargoCard, a Shanghai-based prepaid service provider, have partnered to deliver prepaid, mobile payment and loyalty solutions to merchants in China. i2c will provide payment processing services to enable KargoCard to offer a robust suite of products within China.
“KargoCard is an established leader in their market with a rapidly growing distribution network, strong management team and an impressive client list,” said Amir Wain, CEO of i2c. “Their drive to revolutionize the Chinese payments industry aligns perfectly with i2c’s commitment to bring innovation to global payments.”
In a recent report by Mercator Advisory Group, China was estimated to be the world’s largest prepaid market in terms of market potential. With several high-profile clients like Beard Papa, Happy Lemon and Cloud Nine, KargoCard is set to grow exponentially in this market. To support KargoCard and i2c’s growing presence in the Asia-Pacific region, i2c is building out a new data center in Shanghai.
“Our tremendous growth and aggressive expansion plans require an established processing platform that offers superior flexibility and scalability. i2c’s platform provides this, as well as a rich feature set that will allow us to better serve our customers,” said KargoCard CEO David Suzuki.
Cloud Computing Bootcamp at Cloud Expo New York
Want to understand in just hours what experts have spent many hundreds of days deciphering?
The new, “super-sized” four-day Cloud Computing Bootcamp is a condensed introduction to cloud computing, carefully designed to help you keep up with evolving trends such as Big Data, PaaS, APIs, Mobile, Social, and Data Analytics. Solutions built around these topics require a sound cloud computing infrastructure to succeed while helping customers harvest real benefits from the transformational change happening across the IT ecosystem.
Where Is the Cloud Going? Try Thinking “Minority Report”
I recently read a news release (here) in which NVIDIA proposes partitioning processing between on-device and cloud-located graphics hardware…here’s an excerpt:
“Kepler cloud GPU technologies shifts cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realizing their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”
Along with the split processing already handled by the Silk browser on the Kindle Fire (see here), this got me thinking about that “processing partitioning” strategy in relation to other aspects of computing, and cloud computing in particular. My thinking is that over the next five to seven years (by 2020 at the latest), there will be several very important seismic shifts in computing, driven by at least four separate events: 1) user data becomes a centralized commodity brokered by a few major players, 2) a new cloud-specific programming language is developed, 3) processing becomes “completely” decoupled from hardware and location, and 4) end-user computing becomes based almost entirely on SoC technologies (see here). The end result will be a degree of data and processing independence never seen before, one that will let us live in that Minority Report world. I’ll describe the events and then explain how they all come together to create what I call “pervasive personal processing,” or P3.
User Data
Data about you, your reading preferences, what you buy, what you watch on TV, where you shop, and so on, exists in literally thousands of different locations, and that’s a problem…not for you…but for merchants and the companies that support them. It’s information that must be stored, maintained, and regularly refreshed to remain valuable; basically, it is what is being called “big data.” The extent of this data can hardly be measured because it is so pervasive and relevant to everyday life. It is contained within so many services we access day in and day out, and businesses are struggling to manage it. Now, the argument goes that they do this, at great cost, because it is a competitive advantage to hoard that information (information is power, right?) and eventually profits will arise from it. Um, maybe yes and maybe no, but it’s extremely difficult to actually measure that “eventual” profit…so I’ll go along with “no.” Even though big-data-focused hardware and software manufacturers are attempting to alleviate these problems of scale, the businesses that house these growing petabytes…and yes, even exabytes…of data are not seeing the expected benefit to their profits, because it all costs money, lots of it. That money comes off the top line and definitely affects the bottom line.
Because of these imaginary profits (and the very real losses), more and more companies will start outsourcing the “hoarding” of this data until the eventual state is that two or three big players act as brokers. I personally think it will be either the credit card companies or the credit rating agencies…both groups already have the basic framework for delivering consumer profiles as a service (CPaaS) and charging for access rights. A big step toward this will be when Microsoft unleashes IDaaS (Identity as a Service) as part of integrating Active Directory into its Azure cloud. It will be a hurdle to convince the public to trust them, but I think they will eventually prevail.
These profile brokers will start using IDaaS because then they don’t have to have separate internal identity management systems (for separate data repositories of user data) for other businesses to access their CPaaS offerings. Once this starts to gain traction, you can bet that the real data mining begins on your online, and offline, habits because your loyalty card at the grocery store will be part of your profile…as will your credit history and your public driving record and the books you get from your local library and…well, you get the picture. Once your consumer profile is centralized, all kinds of data feeds will appear because the profile brokers will pay for them. Your local government, always strapped for cash, will sell you out in an instant for some recurring monthly revenue.
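To make the brokered-profile idea a bit more concrete, here is a minimal sketch in Python of merging several data feeds into a single consumer profile; the feed names and fields are purely hypothetical, and no real CPaaS or IDaaS API is implied.

    from typing import Dict, List

    # Hypothetical feeds a profile broker might purchase (illustrative only).
    grocery_feed = {"loyalty_purchases": ["coffee", "cereal"]}
    library_feed = {"library_checkouts": ["Minority Report"]}
    dmv_feed     = {"driving_violations": []}

    def merge_profile(consumer_id: str, feeds: List[Dict]) -> Dict:
        """Fold independent feeds into one centralized profile record."""
        profile: Dict = {"consumer_id": consumer_id}
        for feed in feeds:
            profile.update(feed)  # each purchased feed adds new attributes
        return profile

    print(merge_profile("consumer-123", [grocery_feed, library_feed, dmv_feed]))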
Cloud-specific Programming
A programming language is an artificial language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine and/or to express algorithms precisely, but to date they have been entirely encapsulated within the local machine (or, in some cases, the nodes of a supercomputer or HPC cluster, which for our purposes is really just one large machine). This means that programs written for those systems need to know precisely where each function will run, which subsystems will run it, the exact syntax and context, and so on. One slight error or a small lag in response time and the whole thing could crash or, at best, run slowly or produce additional errors.
But, what if you had a computer language that understood the cloud and took into account latency, data errors and even missing data? A language that was able to partition processing amongst all kinds of different processing locations, and know that the next time, the locations may have moved? A language that could guess at the best place to process (i.e. lowest latency, highest cache hit rate, etc.) but then change its mind as conditions change?
That language would allow you to specify a type of processing and then actively seek the best place for that processing to happen based on many different details…processing intensity, floating-point requirements, whether it is the entire algorithm or only a proportional subset or superset…and it would fully understand that, in some cases, it will have to make educated guesses about what the returned data will be (in the case of unexpected latency). It would also have to know that the data to be processed may exist in a thousand different locations, such as the CPaaS providers, government feeds, or other providers of specific data types. And it would adapt its processing to the available processing locations so that functionality degrades gracefully…maybe based on a probability factor built into the language that records variables over time and uses them to guess where you will be next and line up the needed processing beforehand. The possibilities are endless, but not impossible…which leads to…
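As a rough illustration of the kind of placement decision such a language's runtime might make, here is a minimal sketch in Python; the site names and the scoring heuristic are assumptions of mine, not a feature of any existing cloud language.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ProcessingSite:
        """A hypothetical location that could execute part of a workload."""
        name: str
        latency_ms: float      # last measured round-trip latency
        cache_hit_rate: float  # fraction of the needed data already cached there

    def score(site: ProcessingSite) -> float:
        # Heuristic: favor high cache hit rate and low latency.
        return site.cache_hit_rate / (1.0 + site.latency_ms)

    def place_task(sites: List[ProcessingSite]) -> ProcessingSite:
        """Pick the currently best site; a real runtime would re-evaluate
        continuously as latency and data locality change."""
        return max(sites, key=score)

    sites = [
        ProcessingSite("on-device SoC",  latency_ms=1.0,  cache_hit_rate=0.2),
        ProcessingSite("edge node",      latency_ms=15.0, cache_hit_rate=0.6),
        ProcessingSite("regional cloud", latency_ms=80.0, cache_hit_rate=0.9),
    ]
    print("run this step on:", place_task(sites).name)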
Decoupled Processing and SoC
As can be seen from the efforts NVIDIA is making in this area, the processing of data will soon become completely decoupled from where that data lives or is used. Exactly what this looks like and how it gets done depends on the other events (see the previous sections), but the bottom line is that once processing is decoupled, a whole new class of device will appear, in both static and mobile versions, based on System on a Chip (SoC) designs that allow deep processing density with very, very low power consumption. These devices will support multiple code sets across hundreds of cores and will intelligently communicate their capabilities in real time to the distributed processing services that request their local processing…whether over Wi-Fi, Bluetooth, IrDA, GSM, CDMA, or whatever comes next, the devices themselves will make the choice based on the best use of bandwidth, the processing request, location, and so on. They will take full advantage of the cloud-specific computing languages to distribute processing across dozens and possibly hundreds of processing locations, and they will hold almost no data because they don’t have to; everything exists somewhere else in the cloud. In some cases these devices will be very small, the size of a thin watch for example, yet they will be able to do the equivalent of what a supercomputer can do, because they don’t do all of the processing themselves, only what makes sense for their location and capabilities.
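To illustrate the capability-advertisement idea, here is a minimal sketch in Python of what such a device might broadcast and how a scheduler might match a request against it; the message fields and the matching rule are invented for illustration only.

    import json
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class CapabilityAd:
        """What a hypothetical P3 device might periodically broadcast."""
        device_id: str
        cores: int
        code_sets: List[str]   # instruction/code sets the device can execute
        links: List[str]       # currently usable radios
        battery_pct: int

        def to_json(self) -> str:
            return json.dumps(self.__dict__)

    def can_serve(ad: CapabilityAd, code_set: str, min_cores: int) -> bool:
        # Scheduler-side check: does this device fit the processing request?
        return code_set in ad.code_sets and ad.cores >= min_cores and ad.battery_pct > 20

    ad = CapabilityAd(
        device_id="watch-0042",
        cores=128,
        code_sets=["wasm", "spir-v"],
        links=["wifi", "bluetooth"],
        battery_pct=63,
    )
    print(ad.to_json())
    print("accepts request:", can_serve(ad, code_set="wasm", min_cores=64))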
These decoupled processing units, Pervasive Personal Processing or P3 units, will allow you to walk up to any workstation or monitor or TV set…anywhere in the world…and basically conduct your business as if you were sitting in front of your home computer. All of your data, your photos, your documents, and your personal files will be instantly available in whatever way you prefer. All of your history for whatever services you use, online and offline, will be directly accessible. The memo you left off writing that morning in the Houston office will be right where you left it, on the screen you just walked up to in a hotel lobby in Tokyo the next day, with the cursor blinking in the middle of the word you stopped on.
Welcome to Minority Report.