Category archive: News & Analysis

Twitter acquires machine learning start-up Magic Pony

Twitter has stepped up its efforts in the machine learning arena after announcing the acquisition of visual processing technology company Magic Pony.

While the company claims machine learning is central to the brand's capabilities, it has been relatively quiet in the market segment in comparison to industry heavyweights such as IBM, Google and Microsoft. This is the third acquisition the team has made in this area, reported to be in the range of $150 million, following the purchase of Whetlab last year and Mad Bits in 2014. Google, by comparison, acquired Jetpac, Dark Blue Labs and Vision Factory, and spent a reported $500 million on DeepMind, all in 2014.

“Machine learning is increasingly at the core of everything we build at Twitter,” said Jack Dorsey, Twitter CEO. “Magic Pony’s machine learning technology will help us build strength into our deep learning teams with world-class talent, so Twitter can continue to be the best place to see what’s happening and why it matters, first. We value deep learning research to help make our world better, and we will keep doing our part to share our work and learnings with the community.”

The acquisition follows Twitter's announcement last week that advertisers will now be able to use emoji keyword targeting for Twitter Ads. Although a simple proposition in the first instance, the new feature does open up the opportunity for machine learning enhanced advertising solutions.

Magic Pony, which was founded in 2014 and currently has 11 employees, was acquired to bolster the visual experiences that are delivered across Twitter apps. The team will link up with Twitter Cortex, the in-house machine learning department, to improve image processing expertise.

The technology itself uses convolutional neural networks to scale up an image. Taking the information in a picture, the technology infers a larger and more detailed image by filling in the detail it expects to see. Much in the same way a human can imagine the rest of a car after seeing only the door, the technology learns from previous examples and applies those lessons to new images.
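Magic Pony has not published the details of its models, but a minimal sketch of the general approach – an SRCNN-style super-resolution network written in PyTorch, with purely illustrative layer sizes – looks something like this:

```python
# A minimal SRCNN-style super-resolution network (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extraction, non-linear mapping, reconstruction.
        self.extract = nn.Conv2d(3, 64, kernel_size=9, padding=4)
        self.map = nn.Conv2d(64, 32, kernel_size=1)
        self.reconstruct = nn.Conv2d(32, 3, kernel_size=5, padding=2)

    def forward(self, low_res, scale=2):
        # Upscale with plain interpolation first, then let the network
        # "imagine" the missing high-frequency detail.
        x = F.interpolate(low_res, scale_factor=scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.extract(x))
        x = F.relu(self.map(x))
        return self.reconstruct(x)

model = TinySRCNN()
low_res = torch.rand(1, 3, 64, 64)   # a dummy 64x64 RGB patch
high_res = model(low_res)            # -> 1x3x128x128 after 2x upscaling
print(high_res.shape)
```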

Magic Pony itself was initially backed by Octopus Ventures, which has seemingly found a specialty in spotting promising AI start-ups. Prior to Magic Pony being acquired by Twitter, Octopus Ventures invested in Evi, which was acquired by Amazon in 2012, and SwiftKey, which was acquired by Microsoft this year.

“Today marks a great day for the Magic Pony team,” said Luke Hakes, Investment Director at Octopus Ventures. “We’re proud to have believed in the concept early on and to then have had the privilege of joining their journey. The technology Magic Pony has developed is revolutionary and pushes the boundaries of what is possible with AI in the video space.

“The UK continues to grow as the ‘go-to’ place for companies looking to build best in breed AI technology – Octopus has been fortunate to work with the founders of three companies in this space that have gone on to be acquired, with Evi and Amazon, SwiftKey and Microsoft, and now Magic Pony and Twitter. We are excited for the Magic Pony team, but also to take what we have learnt on the last three journeys and help the next generation of entrepreneurs lead the way in the on-going AI revolution.”

Demystifying the three myths of the cloud database

The cloud is here and it's here to stay. The cost savings, flexibility and added agility alone mean that cloud is a force to be reckoned with.

However, many businesses are struggling to figure out exactly how to get the most out of the cloud, particularly when choosing which infrastructure elements to leave on-premises and which to migrate to the cloud. A recent SolarWinds survey found that only 42 per cent of businesses will have half or more of their organisation's total IT infrastructure in the cloud within the next three to five years. Furthermore, seven per cent say their organisation has not yet migrated any infrastructure at all to the cloud, though many of these plan to once they have worked out what to transfer and how to do it.

One of the more controversial moves when it comes to migrating infrastructure to the cloud is the database. Hesitancy in making the shift to the cloud is clear, with nearly three quarters (73%) of organisations stating they have yet to do so – but why is this?

The database is often seen as the most critical piece of IT infrastructure when it comes to performance, and it lies at the heart of most applications, meaning changes are perceived as risky. If a move or a change in how it operates goes wrong, the ripple effect could impact the entire business – for example, through the loss of important data.

While on some level this fear is justifiable, several of the common objections are better described as myths, or misconceptions, rather than reality:

Myth 1: Need high performance and availability? The cloud is not a suitable fit.

Several years ago, during the early days of the cloud, the 'one size fits all' criticism may have had some truth to it. However, with the natural maturation of the technology, we're now at a point where databases in the cloud can meet the needs of even the most demanding applications.

The reality of today's cloud storage systems is that there are very powerful database services available in the cloud, many based on SSD drives offering up to 48,000 IOPS and 800MBps of throughput per instance. And while outages in the cloud were a common annoyance two to three years ago, the availability today's cloud providers offer often exceeds what most on-premises systems are able to deliver. Today's cloud provider SLAs, combined with the ease of setting up replicas and standby systems and the durability of the data stored, are often able to deliver better results.

This is not to say that the database administrator (DBA) is free of responsibility. While the cloud provider will take care of some of the heavy lifting involved in configuration and administration, the DBA is still responsible for overall performance. The DBA therefore still needs to pay close attention to resource contention, bottlenecks, query tuning, execution plans and so on – some of which may mean new performance analysis tools are needed.
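Much of that tuning work looks the same in the cloud as it does on-premises. A minimal sketch, assuming a PostgreSQL database reached through psycopg2 (the connection details, table and query are placeholders), pulls back an execution plan for inspection:

```python
# Inspecting an execution plan from Python (assumes PostgreSQL + psycopg2;
# connection details and the query are placeholders).
import psycopg2

conn = psycopg2.connect(host="db.example.com", dbname="appdb",
                        user="dba", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) "
                "SELECT * FROM orders WHERE customer_id = %s", (42,))
    for (line,) in cur.fetchall():
        print(line)   # look for sequential scans, misestimated row counts, etc.
conn.close()
```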

Myth 2: The cloud is not secure.

Even though security should always be a concern, just because you can stroll into a server room and physically see the server racks doesn't necessarily mean they are more secure than the cloud. In fact, there have been many more high-profile security breaches involving on-premises systems than public cloud.

The truth is the cloud can be extremely secure – you just need a plan. When using a cloud provider, security is not entirely their responsibility; instead, it needs to be thought of as a shared job. They provide reasonably secure systems, and you are responsible for secure architecture and processes.

You need to be very clear about the risks, the corporate security regulations which need to be abided by and the compliance certifications that must be achieved. By developing a thorough understanding of your cloud provider's security model, you will be able to implement proper encryption, key management, access control, patching, log analysis and so on to complement what the provider offers and take advantage of its security capabilities. With this collaborative approach to security and an in-depth understanding of one another, you can ensure that your data is as safe, if not safer, than if it were in physical server racks down the hall.
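One concrete piece of that shared job is encrypting sensitive data before it ever reaches the provider. A minimal sketch using Python's 'cryptography' package – key management is deliberately simplified here and would normally live in a KMS or HSM:

```python
# Encrypting data before it leaves your network (a sketch of the
# shared-responsibility idea; key management is deliberately simplified).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice: store in a KMS/HSM, not in code
f = Fernet(key)

record = b"customer_id=42;card=4111-1111-1111-1111"
ciphertext = f.encrypt(record)       # this is what gets written to cloud storage
print(ciphertext)

# Only holders of the key can read it back.
print(f.decrypt(ciphertext))
```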

Myth 3: If I use cloud I will have no control of my database.

This is another half-truth. Migrating your database to the cloud does hand over some of the day-to-day maintenance control to your provider, but when it comes to performance your control won't – and shouldn't – be any less.

As mentioned above, an essential step in staying in control of your database is to understand your cloud provider's service details. You need to understand their SLAs, review their recommended architecture, stay on top of new services and capabilities, and be aware of scheduled maintenance which may impact your workloads. It's also important to take into account data transfer and latency for backups and for keeping all your databases in sync, especially if your database-dependent applications need to integrate with one another and are not in the same cloud deployment.

Finally, keep a copy of your data with a different vendor in a different location. If you take an active role in managing backup and recovery, you will be less likely to lose important data in the unlikely event of vendor failure or outage. The truth is that most cloud providers offer plenty of options, giving you the level of control you need for each workload.
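As a sketch of what that off-vendor copy might look like in practice – assuming both targets speak the S3 API, and with bucket names, endpoint and credentials as placeholders:

```python
# Keeping an off-vendor copy of a database dump (a sketch; bucket names,
# endpoint and credentials are placeholders).
import boto3

dump_file = "nightly_backup.sql.gz"

# Primary provider.
primary = boto3.client("s3")
primary.upload_file(dump_file, "my-primary-backups", dump_file)

# Second, independent vendor exposing an S3-compatible endpoint.
secondary = boto3.client(
    "s3",
    endpoint_url="https://objects.other-vendor.example",
    aws_access_key_id="SECONDARY_KEY",
    aws_secret_access_key="SECONDARY_SECRET",
)
secondary.upload_file(dump_file, "my-offsite-backups", dump_file)
```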

Conclusion

The decision to migrate a database to the cloud is not an easy one, nor should it be. Many things need to be taken into account and the benefits and drawbacks need to be weighed up. However, given the tools available and the maturity of the cloud market today, deciding not to explore cloud as an option for your database could be short-sighted.

Written by Gerardo Dada, Head Geek at SolarWinds

Connected home will be operated by Apple and Google

Research from Gartner has claimed that 25% of households in developed economies will use digital assistants, such as Apple's Siri or Google Assistant, on smartphones as the primary means of interacting with the connected home.

The user experience is an area which has been prioritized by numerous tech giants, including those in the consumer world, as the process of normalizing the connected world moves forward. Although IoT as a concept has been generally accepted by industry, efforts to take the technology into the wider consumer ecosystem are underway.

Connecting all IoT applications under a digital assistant could be a means of removing the complexity of managing the connected home, playing on the consumer drive for simplicity and efficiency. The digital assistant also presents an entry point for artificial intelligence, as appliances and systems in the home can be optimized alongside information available over the internet. Energy consumption, for example, could potentially be reduced as the digital assistant adjusts a thermostat's setpoint depending on current weather conditions.
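Purely as an illustration of that kind of optimization – the rule, the values and the device interface below are invented rather than any vendor's implementation:

```python
# Purely illustrative: how an assistant might nudge a thermostat setpoint
# based on the outdoor forecast (all values and the interface are hypothetical).
def adjust_setpoint(preferred_c: float, outdoor_c: float) -> float:
    """Relax the setpoint when the weather is already doing the work."""
    if outdoor_c >= preferred_c - 1:
        return preferred_c - 2      # mild outside: heat less
    if outdoor_c <= 0:
        return preferred_c + 0.5    # very cold: pre-heat slightly to avoid spikes
    return preferred_c

# e.g. a 21C preference on a 15C day stays at 21C; on a 20C day it drops to 19C.
print(adjust_setpoint(21.0, 20.0))
```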

“In the not-too-distant future, users will no longer have to contend with multiple apps; instead, they will literally talk to digital personal assistants such as Apple’s Siri, Amazon’s Alexa or Google Assistant,” said Mark O’Neill, Research Director at Gartner. “Some of these personal assistants are cloud-based and already beginning to leverage smart machine technology.”

The process of normalizing IoT in the consumer world will ultimately create a number of new opportunities for the tech giants, as the technology could offer a gateway into the home for a number of other verticals. Banks and insurance companies, for example, could offer advice to customers on how to save money on bills, should they have access to the data generated in the connected home.

“APIs are the key to interoperating with new digital interfaces and a well-managed API program is a key success factor for organizations that are interested in reaching consumers in their connected homes,” said O’Neill. “In the emerging programmable home, it is no longer best to spend time and money on developing individual apps. Instead, divert resources to APIs, which are the way to embrace the postapp world.”
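To give a flavour of what 'diverting resources to APIs' means at its simplest, here is a hypothetical sketch of a device endpoint an assistant or partner service might call; the framework (Flask), the routes and the device are illustrative only, not anything Gartner or the vendors have specified:

```python
# A minimal sketch of a programmable-home device API (all names hypothetical).
from flask import Flask, jsonify, request

app = Flask(__name__)
thermostat = {"setpoint_c": 21.0, "mode": "heat"}

@app.route("/api/v1/thermostat", methods=["GET"])
def read_thermostat():
    return jsonify(thermostat)

@app.route("/api/v1/thermostat", methods=["PUT"])
def update_thermostat():
    thermostat.update(request.get_json())   # e.g. {"setpoint_c": 19.5}
    return jsonify(thermostat), 200

if __name__ == "__main__":
    app.run(port=8080)
```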

25% of New Yorkers have no broadband access

Research from the Wireless Broadband Alliance has highlighted that broadband connectivity is no longer a challenge reserved for rural areas, as 57% of the world's urban population is currently unconnected, reports Telecoms.com.

Initiatives to increase the number of people with a consistent connection to the internet have predominantly focused on rural communities, though the report demonstrated there are still a number of advanced western cities with higher numbers of unconnected citizens than might be expected. The research showed that 27% of New York's population and 25% of Los Angeles' would be classed in the "Urban Broadband Unconnected" category. Shanghai was another city where the percentage of unconnected urban citizens seems high, at 42%.

While New York, Los Angeles and Shanghai could be seen as technologically advanced cities, the seemingly high number of unconnected citizens could be attributed to the diversity in wealth and affluence within them. The report claims the numbers could be driven simply by broadband not being available in certain neighbourhoods, but also by the price of broadband being unaffordable. While this would almost certainly be considered a 'first-world problem', there could be a knock-on impact on other areas of society, for example politics, as more communication moves online and in particular to social media.

The CIA World Fact Book lists the USA as one of the world's most affluent countries, with GDP per capita of $55,800, which makes the statistics from two of its leading cities perhaps more surprising, though it does provide context for the higher percentages in other nations. Lagos and Karachi were two of the cities with the highest proportion of unconnected urban citizens, at 88% and 86% respectively, though their countries' GDP per capita is listed at $6,100 and $5,000, and both have typically been associated with political unrest.

“There is a clear divide between the digital haves and the digital have-nots,” said Shrikant Shenwai, CEO of the Wireless Broadband Alliance. “And while this divide generally mirrors socioeconomic trends around the world, there are surprisingly high levels of urban unconnected citizens in major cities.

“World Wi-Fi (June 20) Day is an opportunity to recognize the contributions being made to help connect the unconnected around the globe, whether they be in major cities or rural communities.”

The report evaluated 18 of the world's leading cities including Tokyo, Dusseldorf, New Delhi, Johannesburg and London, which was listed as the world's most connected city, with only 8% of its population currently unconnected. Europe was the most connected continent, demonstrating the lowest level of unconnected citizens at 17%, while in Asia Pacific 68% of urban citizens were unconnected.

Oracle sets sights on IaaS market as it reports 49% cloud growth

Oracle has reported its 2016 Q4 results, stating revenue over the period declined 1% to $10.6 billion, though its cloud business grew 49% to $859 million, reports Telecoms.com.

2016 has seen Oracle spend almost $2 billion on cloud-specific organizations, as the tech giant continues its efforts to shift the business's focus toward the burgeoning cloud market. While Oracle could be seen as one of the industry's elder statesmen, its efforts in the M&A market are seemingly paying off, as PaaS and SaaS continue to demonstrate healthy growth to compensate for the dwindling legacy business units. The team have also outlined plans to make strides in the IaaS market segment.

Growth in the SaaS and PaaS business has been accelerating in recent years, with CEO Safra Catz quoting 20% growth in 2014, 34% in 2015, and now 52% over the course of FY 2016. Q4 gross margin for SaaS and PaaS was 57%, up from 40% during the same period a year earlier. The business would appear to be making healthy progress, though Catz does not seem content with current growth levels: the team has ambitions to raise gross margin to 80% in the mid-term, and expects cloud year-on-year revenue growth of 75% to 80% for Q1 FY 2017.

“For most companies as their business grows, the growth rates go down,” said Catz. “In our case, as the business grows, the growth rates are continuing to increase. Now, as regards our cloud revenue accounting, we have reviewed it carefully and are completely confident that it is 100% accurate and, if anything, slightly conservative.”

Moving forward, CTO Larry Ellison highlighted that the team plans to drive rapid expansion of the cloud business. Oracle is targeting growth rates double those of its competitors, with the ambition of being the first SaaS company to reach $10 billion in annual revenue. The team is not only targeting the customer experience markets, but also the Enterprise Resource Planning (ERP) and Human Capital Management (HCM) segments, where it believes there will be higher growth rates.

“We’re a major player in ERP and HCM,” said Ellison. “We’re almost the only player in supply chain and manufacturing. We’re the number one player in marketing. We’re very competitive. We’re number one – tied for number one in service.”

Secondly, the team will be aiming to facilitate growth by expanding its IaaS data centre focus, which is currently an 'also-ran' part of the cloud business. Ellison claims Oracle is in a strong position to grow in this area, having invested heavily in second generation data centres, and pointing to the potential of combining PaaS and IaaS for the company's installed base of database customers to help them move to the cloud.

“And we built, again, the second generation data centre, which we think is highly competitive with anything out there lower cost, better performance, better security, better reliability than any of our competitors, and there’s huge demand for it, and we’re now starting to bring customers into that,” said Ellison. “We think that’s another very important driver to Oracle for overall growth.”

The last few years have seen a considerable transformation in the Oracle business, as it has invested considerably in the development of new technology, as well as acquisitions, seemingly hedging its bets to buy its way into the cloud market. The numbers quoted by Catz and Ellison indicate there has been some traction and the market does seem to be reacting positively to the new Oracle proposition.

In terms of the IaaS market, whether Oracle succeeds remains to be seen. Although the company has the potential to put considerable weight behind any move in this market, it is going to be playing catch-up with some noteworthy players, who have plenty of cash themselves. Whether Oracle can catch the likes of AWS, Microsoft Azure and Google, as well as the smaller players in the market, is an open question, though its success in the SaaS and PaaS markets does show some promise.

Mozilla Firefox launches container feature for multiple online personas

The Mozilla Firefox team has announced it will integrate a new container-driven feature to allow users to sign into multiple accounts on the same site simultaneously.

While the concept of using technology to manage multiple accounts and different personas is not a new idea, the practicalities have largely been out of reach. With the new feature, users will be able to sign into multiple accounts in different contexts for uses such as personal email, work accounts, banking and shopping. Twitter is one of the most relevant examples, as it is not uncommon for individuals to have separate Twitter accounts for work and personal life.

“We all portray different characteristics of ourselves in different situations,” said Tanvi Vyas, one of the security engineers working on the project, on the company blog. “The way I speak with my son is much different than the way I communicate with my coworkers. The things I tell my friends are different than what I tell my parents. I’m much more guarded when withdrawing money from the bank than I am when shopping at the grocery store. I have the ability to use multiple identities in multiple contexts. But when I use the web, I can’t do that very well.

“The Containers feature attempts to solve this problem: empowering Firefox to help segregate my online identities in the same way I can segregate my real life identities.”
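The sketch below is not Mozilla's code; it simply illustrates the underlying idea of isolated cookie stores using ordinary Python sessions (the URL and credentials are placeholders):

```python
# Not Firefox's implementation, just the underlying idea: separate cookie
# jars mean the same site can be used as two different identities at once.
import requests

work = requests.Session()       # one "container": its own cookies
personal = requests.Session()   # another "container": completely separate cookies

work.post("https://example.com/login", data={"user": "me@work"})
personal.post("https://example.com/login", data={"user": "me@home"})

# Each session carries only its own login cookie, so requests made through
# 'work' and 'personal' are signed in as different accounts.
print(work.cookies.get_dict(), personal.cookies.get_dict())
```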

The Mozilla Firefox team is one of the first to have cracked the problem, though it does admit there are a number of challenges to come. Questions the team now needs to answer include:

  • How will users know what context they are operating in?
  • What if the user makes a mistake and uses the wrong context; can the user recover?
  • Can the browser assist by automatically assigning websites to Containers so that users don’t have to manage their identities by themselves?
  • What heuristics would the browser use for such assignments?

“We don’t have the answers to all of these questions yet, but hope to start uncovering some of them with user research and feedback,” said Vyas. “The Containers implementation in Nightly Firefox is a basic implementation that allows the user to manage identities with a minimal user interface.”


Machine learning front and centre of R&D for Microsoft and Google

Microsoft and Google have announced plans to expand their machine learning capabilities, through acquisition and new research offices respectively, reports Telecoms.com.

Building on the ‘Conversation-as-a-Platform’ proposition put forward by CEO Satya Nadella at Build 2016, the Microsoft team has announced plans to acquire Wand Labs. The purchase will add weight to the ‘Conversation-as-a-Platform’ strategy, as well as supporting innovation ambitions for Bing intelligence.

“Wand Labs’ technology and talent will strengthen our position in the emerging era of conversational intelligence, where we bring together the power of human language with advanced machine intelligence,” said David Ku, Corporate Vice President of the Information Platform Group on the company’s official blog. “It builds on and extends the power of the Bing, Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

More specifically, Wand Labs adds expertise in semantic ontologies, services mapping, third-party developer integration and conversational interfaces to the Microsoft engineering team. The ambition of the overarching project is to make the customer experience more seamless by harnessing human language in an artificial environment.

Microsoft's move into the world of artificial intelligence and machine learning has not been a smooth ride to date, though this does not seem to have hindered investment. Back in March, the company's AI-powered Twitter account Tay went into meltdown, though the team pushed forward, updating its Cortana Intelligence Suite and releasing its Skype Bot Platform. Nadella has repeatedly highlighted that artificial intelligence and machine learning are the future for the company, stating at Build 2016:

“As an industry, we are on the cusp of a new frontier that pairs the power of natural human language with advanced machine intelligence. At Microsoft, we call this Conversation-as-a-Platform, and it builds on and extends the power of the Microsoft Azure, Office 365 and Windows platforms to empower developers everywhere.”

Google's efforts in the machine learning world have also been pushed forward this week, as the team announced on its blog a dedicated machine learning research group based in its Zurich offices. The group will focus on three areas specifically: machine intelligence, natural language processing and understanding, and machine perception.

Like Microsoft, Google has prioritized artificial intelligence and machine learning, though both companies will be playing catch-up with the likes of IBM and AWS, whose AI propositions have been in the market for some time. Back in April, Google CEO Sundar Pichai said in the company’s earnings call “overall, I do think in the long run, I think we will evolve in computing from a mobile first to an AI first world,” outlining the ambitions of the team.

Google itself already has a number of machine learning capabilities incorporated in its product portfolio, though these could be considered relatively rudimentary. Translate, Photo Search and SmartReply for Inbox already contain aspects of machine learning, though the team is targeting more complex and accurate competencies.

Elsewhere, Twitter has announced on its blog that advertisers will now be able to use emoji keyword targeting for Twitter Ads. The new feature uses emoji activity as a signal of a person's mood or mindset, allowing advertisers to communicate marketing messages more effectively while minimizing the potential for backlash from disgruntled Twitter users. Although the blog post does not mention machine learning, it does leave open the opportunity for future innovation in the area.
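As a toy illustration of treating emoji as a coarse mood signal – the mapping and example tweets below are invented, and Twitter has not described how its feature works internally:

```python
# A toy illustration of using emoji as a coarse mood signal for ad targeting
# (the mapping and tweets are invented; Twitter has not said it works this way).
POSITIVE = {"😀", "😍", "🎉", "👍"}
NEGATIVE = {"😡", "😢", "👎"}

def mood_from_tweet(text: str) -> str:
    score = sum(ch in POSITIVE for ch in text) - sum(ch in NEGATIVE for ch in text)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(mood_from_tweet("Just landed in Lisbon 😍🎉"))   # -> positive
print(mood_from_tweet("Train cancelled again 😡"))     # -> negative
```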

Samsung acquires containers-cloud company Joyent

Samsung has agreed to buy San Francisco-based cloud provider Joyent in an effort to diversify its product offering in declining markets, reports Telecoms.com.

Financials for the deal have not been disclosed; however, the team stated the acquisition will build Samsung's capabilities in the mobile and Internet of Things arenas, as well as in cloud-based software and services markets. The company's traditional means of differentiating its products have been increased marketing efforts and effective distribution channels, though the new expertise will add another string to its bow.

“Samsung evaluated a wide range of potential companies in the public and private cloud infrastructure space with a focus on leading-edge scalable technology and talent,” said Injong Rhee, CTO of the Mobile Communications business at Samsung. “In Joyent, we saw an experienced management team with deep domain expertise and a robust cloud technology validated by some of the largest Fortune 500 customers.”

Joyent itself offers a relatively unique proposition in the cloud market, as it runs its platform on containers, as opposed to the traditional VMs the majority of other cloud platforms run on. The team reckons that by using containers, efficiency is notably improved, a claim which is generally supported by the industry. A recent poll run on Business Cloud News found 89% of readers found container-run cloud platforms more attractive than those on VMs.

While smartphones would now be considered the norm in western societies, the industry has taken a slight dip in recent months. Using data collected from public announcements and analyst firm Strategy Analytics, estimates showed the number of smartphones shipped in Q1 2016 fell to 334.6 million units from 345 million during the same period in 2015. The slowdown has been attributed to lucrative markets such as China becoming increasingly mature, as well as a pessimistic outlook from consumers on the global economy.

As a means of differentiating the brand and tackling a challenging market, Samsung has been looking to software and services offerings, as creating a unique offering from a hardware or platform perspective has become next to impossible. In terms of hardware, the latest release of every smartphone contains pretty much the same features (high-performance camera, lighter than ever before, and so on), and on the platform side, the majority of the smartphone market runs on Android. Software and services have become the new battleground for product differentiation.

Last month, the team launched its Artik Cloud Platform, an open data exchange platform designed to connect any data set from any connected device or cloud service. IoT is a market which has been targeted by numerous organizations and is seemingly the focus of a healthy proportion of product announcements. The launch of Artik Cloud puts Samsung in direct competition with the likes of Microsoft Azure and IBM Bluemix, as industry giants jostle for the lead in an IoT race that has yet to produce a clear winner. The inclusion of Joyent's technology and engineers will give Samsung extra weight in the developing contest.

The purchase also offers Samsung the opportunity to scale its own cloud infrastructure. The Samsung team says it is one of the world's largest consumers of public cloud data and storage, and the inclusion of Joyent could offer the opportunity to move data in-house and decrease dependency on third-party cloud providers such as AWS.

As part of the agreement, CEO Scott Hammond, CTO Bryan Cantrill, and VP of Product Bill Fine, will join Samsung to work on company-wide initiatives. “We are excited to join the Samsung family,” said Hammond. “Samsung brings us the scale we need to grow our cloud and software business, an anchor tenant for our industry leading Triton container-as-a-service platform and Manta object storage technologies, and a partner for innovation in the emerging and fast growing areas of mobile and IoT, including smart homes and connected cars.”

IBM launches weather predictor Deep Thunder for The Weather Company

IBM's Weather Company has announced the launch of Deep Thunder to help companies predict the actual impact of various weather conditions.

By combining hyper-local, short-term custom forecasts developed by IBM Research with The Weather Company's global forecast model, the team hopes to improve the accuracy of weather forecasting. Deep Thunder will lean on IBM's machine learning technologies to aggregate a variety of historical data sets and future forecasts, providing fresh guidance every three hours.

“The Weather Company has relentlessly focused on mapping the atmosphere, while IBM Research has pioneered the development of techniques to capture very small scale features to boost accuracy at the hyper local level for critical decision making,” said Mary Glackin, Head of Science & Forecast Operations for The Weather Company. “The new combined forecasting model we are introducing today will provide an ideal platform to advance our signature services – understanding the impacts of weather and identifying recommended actions for all kinds of businesses and industry applications.”

The platform itself will combine more than 100 terabytes of third-party data daily, as well as data collected from the company's 195,000 personal weather stations. The offering can be customized to suit the location of various businesses, with IBM execs claiming hyper-local forecasts can be brought down to between 0.2 and 1.2 mile resolution, while also taking into account other local factors such as vegetation and soil conditions.
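To give a rough sense of what aggregating many point observations to a hyper-local grid involves, the sketch below bins synthetic station readings onto an approximately 0.2-mile grid; Deep Thunder's actual models are, of course, far more sophisticated:

```python
# Sketch of rolling point observations up to a ~0.2-mile grid
# (synthetic data; purely illustrative of the aggregation step).
import numpy as np

rng = np.random.default_rng(0)
n_stations = 1000
lat = 40.0 + rng.random(n_stations) * 0.1          # ~7 miles of latitude
lon = -74.0 + rng.random(n_stations) * 0.1
temp_c = 20 + rng.normal(0, 1.5, n_stations)

cell = 0.2 / 69.0                                   # ~0.2 miles in degrees of latitude
rows = ((lat - lat.min()) / cell).astype(int)
cols = ((lon - lon.min()) / cell).astype(int)

grid_sum = np.zeros((rows.max() + 1, cols.max() + 1))
grid_cnt = np.zeros_like(grid_sum)
np.add.at(grid_sum, (rows, cols), temp_c)
np.add.at(grid_cnt, (rows, cols), 1)

with np.errstate(invalid="ignore"):
    grid_mean = grid_sum / grid_cnt                 # NaN where no station reported
print(grid_mean.shape)
```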

Applications for the new proposition vary from agriculture to city planning and maintenance to validating insurance claims. IBM has also stated that consumer behaviour can be programmed into the platform, meaning retailers could use the insight to manage their supply chains and understand what should be stocked on shelves.

Court of Appeals hits back at US telco industry with net neutrality ruling

The District of Columbia Circuit Court of Appeals has hit back at the US telco industry, ruling in favour of government net neutrality regulations, reports Telecoms.com.

Although the decision will be appealed to the US Supreme Court, it marks a victory for chairman Tom Wheeler's camp in the FCC, which has been split over the dispute. Republican commissioner Michael O'Rielly championed efforts opposing Wheeler's Democratic camp, though the ruling does appear to move US carriers closer to the realm of utilities.

“Today’s ruling is a victory for consumers and innovators who deserve unfettered access to the entire web, and it ensures the internet remains a platform for unparalleled innovation, free expression and economic growth,” said Wheeler in a statement. “After a decade of debate and legal battles, today’s ruling affirms the Commission’s ability to enforce the strongest possible internet protections – both on fixed and mobile networks – that will ensure the internet remains open, now and in the future.”

The decision will now ensure US carriers cannot block, degrade or prioritise internet traffic, something that has been strongly opposed by the telecoms industry and members of the Republican Party. The argument against has been based on the idea of an 'open internet' where free trade rules the roost. Texas Senator Ted Cruz once described the move towards net neutrality as "Obamacare for the internet", believing it is burdensome and would create an environment of over-regulation for the internet.

The ruling also hits back at claims made by industry attorneys that ISPs are like newspaper editors, and thus have the right to edit content which flows over their networks. This was rejected by the DC Court of Appeals, which stated ISPs should view themselves as 'conduits for the messages of others' rather than arbiters of the opinions which are viewed on the internet.

While this would be considered a victory for the Wheeler camp inside the FCC, the dispute is likely to continue for some time. AT&T has already announced it will be appealing the decision and Verizon has stated its investments in Verizon Digital Media Services would be at risk without an open Internet.

The dispute on the whole has seen conflicting opinions at every level. The ruling from the DC Court of Appeals also demonstrated similar conflicts, with Senior Circuit Judge Stephen Williams stating “the ultimate irony of the Commission’s unreasoned patchwork is that, refusing to inquire into competitive conditions, it shunts broadband service onto the legal track suited to natural monopolies.”

In terms of opposition within the FCC itself, O’Rielly said in a statement “If allowed to stand, however, today’s decision will be extremely detrimental to the future of the Internet and all consumers and businesses that use it. More troubling is that the majority opinion fails to apprehend the workings of the Internet, and declines to hold the FCC accountable for an order that ran roughshod over the statute, precedent, and any comments or analyses that did not support the FCC’s quest to deliver a political victory.”

The other Republican Commissioner at the FCC, Ajit Pai, stated “I am deeply disappointed by the D.C. Circuit’s 2-1 decision upholding the FCC’s Internet regulations. The FCC’s regulations are unnecessary and counterproductive.”

The end of this dispute is unlikely to be seen for some time, and there are strong arguments in both camps. On the commercial side, represented by the Republican Party and the telco industry, there has to be a means of commercializing the billions of dollars invested in infrastructure; AT&T, Verizon and the rest are not charities. However, the net neutrality camp, containing the Democratic Party and the FCC chairman, insists there has to be an element of control. There is a requirement for telcos to be held accountable, and invoking the First Amendment right to free speech in this context could potentially have dangerous consequences from both a commercial and a political perspective.