Charles Araujo Joins @ExpoDX Faculty | @CharlesAraujo #AI #IoT #IIoT #ArtificialIntelligence #DigitalTransformation

Charles Araujo is an industry analyst, internationally recognized authority on the Digital Enterprise and author of The Quantum Age of IT: Why Everything You Know About IT is About to Change. As Principal Analyst with Intellyx, he writes, speaks and advises organizations on how to navigate through this time of disruption. He is also the founder of The Institute for Digital Transformation and a sought-after keynote speaker. He has been a regular contributor to both InformationWeek and CIO Insight and has been quoted or published in Time, CIO, Computerworld, USA Today and Forbes.

read more

Di Seghposs Joins @CloudEXPO Faculty | @Oracle @DiSeghposs #Cloud #IoT #IIoT #FinTech #DigitalTransformation

Digital transformation has increased the pace of business, creating a productivity divide between the technology haves and have-nots. Managing financial information on spreadsheets and piecing together insight from numerous disconnected systems is no longer an option. Rapid market changes and aggressive competition are motivating business leaders to reevaluate legacy technology investments in search of modern technologies to achieve greater agility, reduced costs and organizational efficiencies. In this session, learn how today’s business leaders are managing finance in the cloud and the essential steps required to get on the right path to creating an agile, efficient and future-ready business.

read more

IIoT and New Transportation | @ExpoDX @JAdP #AI #IoT #IIoT #SmartCities

In past @ThingsExpo presentations, Joseph di Paolantonio has explored how various Internet of Things (IoT) and data management and analytics (DMA) solution spaces will come together as sensor analytics ecosystems. This year, in his session at @ThingsExpo, Joseph di Paolantonio of DataArchon added the numerous transportation areas, from autonomous vehicles to “Uber for containers.” While IoT data in any one area of transportation will have a huge impact in that area, combining sensor analytics from these different areas will impact government, industry, retail and other processes, as well as lifestyle choices.

read more

Seven Criteria for Evaluating a Blockchain Business #ExpoDX #FinTech #Blockchain #DigitalTransformation

The insane world of cryptocurrency may be chock full of schemers, scammers, and sharks, but there is more to the world of blockchain than crypto.

In fact, the blockchain story is bifurcating, with crypto and all its craziness on one side, and enterprise blockchain on the other.

While it may be fun to poke our stick at the former, Intellyx’s focus is on enterprise digital transformation – which means that bona fide blockchain-based business models are a key part of this ongoing story of disruption.

Regardless, we receive numerous pitches every day from all manner of blockchain-based companies, from the most serious to the silliest.

When we set up a briefing with one of them, we need to quickly determine which camp it belongs in: serious enterprise play or crazy crypto?

We’re not alone. If you’re interested in this space, regardless of whether you’re an investor, participant, or potential customer, you’ll need to separate the wheat from the chaff yourself.

Here, then, are our seven criteria for evaluating such companies – criteria you may find useful as you navigate the turbulent waters of blockchain.

read more

Back to black: How to ensure a data centre’s critical infrastructure works when needed

For cloud providers and their many customers, a robust and continuously available power supply is amongst the most important reasons for placing IT equipment in a data centre. It is puzzling, therefore, that so many data centres repeatedly fail to measure up to such a mission-critical requirement.

Only last month, for example, cloud service providers and communications companies were hit by yet another protracted power outage affecting a data centre in London. It took time for engineers from the National Grid to restore power and meanwhile many thousands of end users were impacted.

Let’s face it – from time to time there will be Grid interruptions. But they shouldn’t be allowed to escalate into noticeable service interruptions for customers. Inevitably, such incidents create shockwaves among users and cloud service providers, their shareholders, suppliers, and anyone else touched by the inconvenience.

The buck stops here

While it’s clear that something or someone (or both) is at fault, the buck eventually has to stop at the door of the data centre provider.

Outages are generally caused by a loss of power in the power distribution network. This could be triggered by a range of factors, from construction workers accidentally cutting through cables – very common in metro areas – to power equipment failure and adverse weather conditions, not to mention human error.

Mitigating some of these risks should be ‘easy’. Don’t locate a data centre on or near a flood plain and, ideally, choose a site where power delivery from the utilities will not be impaired. This is a critical point. Cloud providers and their customers need to fully appreciate how power is routed from the electricity distribution network to their chosen data centre – in some cases it’s a pretty tortuous path.

Finding the ideal data centre location that ticks all the right boxes is often easier said than done, especially in the traditional data centre heartlands. Certainly, having an N+1 redundancy infrastructure in place is critical to mitigating outages due to equipment failure.

Simply put, N+1 means there is more equipment deployed than needed, allowing for a single component failure. The ‘N’ stands for the number of components necessary to run your system and the ‘+1’ means there is additional capacity should a single component fail. A handful of facilities go further. NGD, for example, has more than double the equipment needed to supply contracted power to customers, split into two power trains on either side of the building, each of which is N+1. Both are completely separated with no common points of failure.
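To make the arithmetic concrete, here is a minimal sketch, in Python, of how the common redundancy schemes translate into deployed equipment; the unit counts are purely illustrative and not any particular facility’s real figures.

```python
# Minimal sketch of the redundancy arithmetic described above. The unit
# counts are illustrative, not any particular facility's real figures.

def units_deployed(required: int, scheme: str) -> int:
    """Return how many units a given redundancy scheme deploys."""
    if scheme == "N":        # no redundancy: a single failure drops the load
        return required
    if scheme == "N+1":      # one spare: tolerates a single component failure
        return required + 1
    if scheme == "2N":       # full duplication: two independent trains of N
        return 2 * required
    if scheme == "2N+2":     # two independent trains, each of them N+1
        return 2 * (required + 1)
    raise ValueError(f"unknown scheme: {scheme}")

N = 4  # e.g. four UPS modules are enough to carry the contracted load
for scheme in ("N", "N+1", "2N", "2N+2"):
    print(f"{scheme:>4}: {units_deployed(N, scheme)} units deployed")
```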

But even with all these precautions, a facility still isn’t necessarily 100 percent ‘outage-proof’. All data centre equipment has an inherent possibility of failure and, while N+1 massively reduces the risks, one cannot be complacent. After all, studies show that a proportion of failures are caused by human mismanagement of functioning equipment. This puts a huge emphasis on engineers being well trained and, critically, having the confidence and experience to know when to intervene and when to allow the automated systems to do their job. They must also be skilled in performing concurrent maintenance and minimising the time during which systems are running with limited resilience.

Rigorous testing

Prevention is always better than cure. Far greater emphasis should be placed on engineers reacting quickly when a component failure occurs rather than assuming that inbuilt resilience will solve all problems. This demands high quality training for engineering staff, predictive diagnostics, watertight support contracts and sufficient on-site spares.

However, to be totally confident that data centre critical infrastructure will perform come hell or high water, it should be rigorously tested. Not all data centres do this regularly. Some will have procedures to test their installations, but these rely on simulating a total loss of incoming power. This isn’t completely foolproof, as the generators remain on standby and the equipment in front of the UPS systems stays on. This means the cooling system and the lighting remain functioning during testing.

Absolute proof comes with ‘Black Testing’. It’s not for the faint-hearted and many data centres simply don’t do it. Every six months NGD isolates incoming mains grid power and, for up to sixteen seconds, the UPS takes the full load while the emergency backup generators kick in. Clearly, we are only cutting the power to one side of a 2N+2 infrastructure, and it’s done under strictly controlled conditions.
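As a rough illustration of the margin involved, the short sketch below checks that an assumed UPS ride-through comfortably covers the generator start-and-transfer window before a test is attempted; the figures are illustrative assumptions, not NGD’s measured values.

```python
# Rough pre-test sanity check: does assumed UPS autonomy cover the generator
# start-and-transfer window with a healthy margin? Figures are illustrative
# assumptions only.

ups_autonomy_s = 300.0     # assumed battery ride-through at full load
generator_start_s = 16.0   # window before the backup generators take the load
required_margin = 3.0      # require autonomy to be a multiple of that window

if ups_autonomy_s >= generator_start_s * required_margin:
    print("Margin OK: UPS ride-through comfortably covers generator start-up")
else:
    print("Insufficient margin: do not proceed with the black test")
```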

When it comes to data centre critical power infrastructure, regular full-scale black testing is the only way to be sure the systems will function correctly in the event of a real problem. Hoping for the best in the event of real-life loss of mains power simply isn’t an option.

Uptime check list

  • Ensure N+1 redundancy at a minimum, and ideally 2N+x redundancy of critical systems, to support separacy, testing and concurrent maintenance
  • Maximise MTTF (mean time to failure) and minimise MTTR (mean time to repair) of backup systems to deliver significant returns in availability, reliability and overall facility uptime (see the sketch after this list)
  • Utilise predictive diagnostics, ensure fit for purpose support contracts, and hold appropriate spares stock on-site
  • Regularly Black Test UPS and generator backup systems
  • Drive a culture of continuous training and regular practice to ensure staff are clear on spotting incipient problems and responding to real-time problems – what to do, and when/when not to intervene
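As referenced in the checklist above, here is a minimal sketch of the standard availability arithmetic behind the MTTF/MTTR item; the figures are illustrative and not measured values for any particular facility.

```python
# Sketch of the steady-state availability arithmetic behind the MTTF/MTTR
# checklist item. All figures are illustrative assumptions.

def availability(mttf_h: float, mttr_h: float) -> float:
    """Steady-state availability of a single component: MTTF / (MTTF + MTTR)."""
    return mttf_h / (mttf_h + mttr_h)

def redundant(avail: float, units: int) -> float:
    """Availability of 'units' independent, fully rated units in parallel
    (the load is carried as long as at least one unit survives)."""
    return 1.0 - (1.0 - avail) ** units

single = availability(mttf_h=50_000, mttr_h=8)  # one power train (assumed values)
paired = redundant(single, units=2)             # two fully rated, independent trains

print(f"single train : {single:.6f}")
print(f"two trains   : {paired:.9f}")
```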

Get smart: Achieving data-driven insights through a modernised IT infrastructure

Many organisations are beginning to rely on real-time insights to drive mission-critical business decisions, but a recent study by Accenture and HfS Research states that nearly 80% of companies can’t make data-driven decisions due to a lack of skills and technology. In the same global survey, respondents reported that 50% to 90% of their data is unstructured and largely inaccessible.

As Debbie Polishook, group chief executive of Accenture Operations has commented: "Organisations need to maximise the use of ‘always on’ intelligence to sense, predict and act on changing customer and market developments."

In part, that means companies need to develop a data-driven backbone that can capitalise on the explosion of structured and unstructured data from multiple sources to gain new insights in order to achieve stronger outcomes. It also means leveraging integrated automation and analytics to understand business challenges and then applying the right combination of tools to find the right answers.

All good in theory, but how? Especially given the challenges of today’s distributed data environments, which often comprise on-premise infrastructure, private clouds, public and hybrid clouds, and colocation facilities spread across a single enterprise with multiple, geographically dispersed locations.

Visibility and control, no matter the data environment

A simple solution to this problem is modernising your IT infrastructure to support the optimal mix of distributed data environments while gaining better visibility and control to access key insights that move the needle toward peak performance.

Cloud infrastructure tools provide IT staff with greater visibility and real-time insight into power usage, thermal consumption, server health, and utilisation. The key benefits are better operational control, infrastructure optimisation, and reduced costs – benefits that improve an organisation’s business operations and its balance sheet, regardless of its specific cloud strategy or whether its infrastructure resides on-premises or at a colocation facility.

Let’s consider an organisation weighing whether to migrate its data to the public cloud. Its IT staff would first need to assess how its systems perform internally and then determine the needs of its applications, including memory, processing power, and operating systems. By virtue of their ability to collect and normalise data, cloud infrastructure tools help IT teams better understand their current on-premise implementation, empowering them to make data-driven decisions about what to provision in the cloud.
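As a hedged illustration of that collect-and-normalise step, the sketch below rolls observed per-server peaks into a sizing recommendation against a hypothetical instance catalogue; the names, sizes and headroom factor are assumptions for illustration, not any vendor’s actual tooling.

```python
# Sketch of the normalisation step described above: roll observed per-server
# peaks into a sizing recommendation against a hypothetical instance
# catalogue. All names, sizes and figures are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ServerSample:
    host: str
    peak_vcpus: float      # observed peak CPU demand, in vCPU-equivalents
    peak_memory_gb: float  # observed peak memory demand

# Hypothetical catalogue: (name, vCPUs, memory in GB), smallest first
CATALOGUE = [("small", 2, 8), ("medium", 4, 16), ("large", 8, 32), ("xlarge", 16, 64)]

def recommend(sample: ServerSample, headroom: float = 1.3) -> str:
    """Pick the smallest catalogue entry covering peak demand plus headroom."""
    need_cpu = sample.peak_vcpus * headroom
    need_mem = sample.peak_memory_gb * headroom
    for name, vcpus, mem_gb in CATALOGUE:
        if vcpus >= need_cpu and mem_gb >= need_mem:
            return name
    return "no single fit: consider splitting the workload"

for s in (ServerSample("app-01", 3.1, 11.5), ServerSample("db-01", 7.4, 52.0)):
    print(f"{s.host} -> {recommend(s)}")
```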

Power and cooling, solved

Today’s high-density servers generate more heat than ever, overstressing cooling systems and consuming more power than legacy equipment. Data centre and colocation facility teams sometimes deploy solutions to measure and manage power at the rack and PDU levels, but have little visibility at the server level. There are also challenges in managing power at the appropriate times and determining the optimal target temperature for every section of the data centre. The traditional method of cooling data centres does not take into account the actual needs of the servers. Consequently, cooling devices operate inefficiently because they do not accurately anticipate cooling requirements.

In addition to providing data centre managers with real-time power consumption data, giving them the clarity needed to lower power usage, increase rack density, and prolong operation during outages, cloud infrastructure tools provide insight into thermal levels, airflow and utilisation. By retrieving server inlet air temperature and providing this information to the building management system to control the cooling system in the data centre, cloud infrastructure tools help data centre managers reduce energy consumption by precisely controlling the amount of cooling required. Cloud infrastructure tools aggregate the server level information at rack, row, and room levels, calculating efficiency metrics, developing three-dimensional thermal maps of the data centre, and determining optimal temperature. The derived metrics and indexes, along with server sensor information, help identify and address data centre energy efficiency issues such as hotspots.
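As a simple illustration of that rack-level roll-up, the sketch below aggregates assumed server inlet temperatures per rack and flags hotspots against an assumed target; the readings and the 27°C threshold are illustrative only.

```python
# Sketch of the rack-level roll-up described above: aggregate server inlet
# temperatures per rack and flag hotspots against an assumed target. The
# readings and the 27 C threshold are illustrative assumptions.

from collections import defaultdict
from statistics import mean

TARGET_INLET_C = 27.0  # assumed upper target for inlet air temperature

# (rack, server, inlet temperature in C): illustrative readings
readings = [
    ("rack-A", "srv-1", 23.5), ("rack-A", "srv-2", 24.1),
    ("rack-B", "srv-3", 28.2), ("rack-B", "srv-4", 29.0),
]

by_rack = defaultdict(list)
for rack, _server, temp_c in readings:
    by_rack[rack].append(temp_c)

for rack, temps in sorted(by_rack.items()):
    status = "HOTSPOT" if max(temps) > TARGET_INLET_C else "ok"
    print(f"{rack}: mean {mean(temps):.1f} C, max {max(temps):.1f} C -> {status}")
```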

Automated health monitoring, not spreadsheets

Through ongoing monitoring, analytics, diagnostics and remediation, data centre operators can take a health management approach to addressing the risk of costly downtime and outages. According to a recent survey, more than half of the IT staff who take an automated approach to data centre health – continuously monitoring and flagging issues within their complex environments in real time – can identify and remedy those issues within 24 hours. Data centre managers who perform health checks manually, whether by walking the floor, squinting at a spreadsheet or, worse, only after an outage, deny themselves the real-time insights cloud infrastructure tools can provide to keep their facilities running and their business reputation intact, to say nothing of the financial repercussions of extended downtime.
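For illustration, here is a minimal sketch of the kind of automated threshold check such tools perform; the metric names and limits are assumptions, not any product’s defaults.

```python
# Sketch of an automated health check of the kind described above: poll a
# metrics snapshot and flag anything outside its limits instead of reviewing
# a spreadsheet by hand. Metric names and limits are illustrative assumptions.

HEALTH_LIMITS = {
    "psu_input_voltage_v": (210.0, 250.0),
    "inlet_temp_c": (10.0, 27.0),
    "fan_speed_pct": (10.0, 95.0),
}

def health_issues(metrics: dict) -> list:
    """Return (metric, value) pairs that are missing or outside their limits."""
    issues = []
    for name, (low, high) in HEALTH_LIMITS.items():
        value = metrics.get(name)
        if value is None or not (low <= value <= high):
            issues.append((name, value))
    return issues

snapshot = {"psu_input_voltage_v": 228.0, "inlet_temp_c": 29.3, "fan_speed_pct": 62.0}
for metric, value in health_issues(snapshot):
    print(f"srv-42: {metric} out of range (value={value})")
```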

Cloud infrastructure tools deliver the visibility and operational control to optimise on-premise, private clouds, colocation, as well as public and hybrid cloud models. These software solutions and products maintain a vigil on power, thermal consumption, server health and utilisation, allowing better data-driven decision-making no matter your distributed data environment.

DevOps: You Ain’t Seen Nothing Yet | @DevOpsSummit #AI #DevOps #Docker

The now-mainstream platform shifts stemming from the first Internet boom brought many changes, but they didn’t really alter the basic relationship between servers and the applications running on them. In fact, that was sort of the point.
In his session at 18th Cloud Expo, Gordon Haff, senior cloud strategy marketing and evangelism manager at Red Hat, will discuss how today’s workloads require a new model and a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker, distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed and scalable applications, take advantage of innovation stemming from a wide variety of open source projects, span hybrid environments, and be adaptable to equally fundamental changes happening in hardware and elsewhere in the stack.

read more

“Let’s take a field trip” – What’s important for students and teacher when working with a Mac?

Last week, on March 27, 2018, at 8 a.m., the first Apple® launch event of 2018 took place. The event focused mainly on the education sector and was held at Chicago’s Lane Tech College Prep School. The biggest news was that Apple would bring out a new $329 iPad® to take on Google Chromebook™ […]

The post “Let’s take a field trip” – What’s important for students and teacher when working with a Mac? appeared first on Parallels Blog.

IoT and Connected Transportation | @ThingsExpo #AI #IoT #IIoT #SmartCities

As ridesharing competitors and enhanced services increase, notable changes are occurring in the transportation model. Despite the cost-effectiveness and flexibility of ridesharing, both drivers and users will need to be aware of the connected environment and how it will impact the ridesharing experience. In his session at @ThingsExpo, Timothy Evavold, Executive Director of Automotive at Covisint, discussed key challenges and solutions to powering a ridesharing and/or multimodal model in the age of connected vehicles.

read more

Enterprise #AI and #MachineLearning | @ExpoDX #IoT #ArtificialIntelligence

Artificial intelligence and machine learning systems are made up of code and algorithms, and as such, they work as fast as computers can process them. Often this means massive amounts of learning can be accomplished every second, without stopping, 24x7x365. Code doesn’t need to take weekends off, holidays, or sick time. Code doesn’t get tired. It can recognize complex patterns, areas of potential improvement and problems in real-time (aka digital-time). Given these computing capabilities and speeds, what are executives to do with AI and machine learning when we live and operate in relatively slow human-time, and work within organizations that move at an even slower pace of organizational-time?

read more