Symantec, Frost Data Capital to incubate startups solving IoT security challenges

Symantec and FDC are to incubate ten IoT security startups per year

Symantec is teaming up with venture capital firm Frost Data Capital to incubate startups primarily developing solutions to secure the Internet of Things.

The companies initially plan to create and seed up to ten early-stage startups with funding, resources and expertise, with Symantec offering access to its own security technologies and Frost Data Capital contributing its data analytics platforms.

“We’re taking a fresh look at driving innovation in the market and this partnership will enable Symantec to transform raw ideas and concepts into meaningful security companies,” said Jeff Scheel, senior vice president, strategy, alliances and corporate development at Symantec. “By collaborating with Frost Data Capital, we create an environment primed to incubate new, innovative and disruptive startups in cyber security – especially in the realm of IoT technologies where verticals like process control, automotive, health care and energy require specialized skills.”

The goal is to encourage development of threat detection analytics services capable of being applied in IoT architectures, where data volume and velocity can be particularly acute challenges when it comes to security and performance.

“We’re seeing a huge opportunity in the IoT security market,” said John Vigouroux, managing partner and president of Frost Data Capital. “We’re excited to work with Symantec to bring cutting-edge, relevant security analytics solutions to market rapidly, in order to prevent next generation cyber attacks on corporate infrastructures. Symantec brings to the table world-class security technology, global presence and strategic relationships that will be instrumental to launching these startups.”

Symantec and FDC are not the only firms looking to incubate startups with a view towards developing IoT solutions that complement their own offerings. Cisco recently announced significant efforts to incubate French and UK startups innovating in the area of IoT networks, while Intel and Deutsche Telekom unveiled similar moves in Europe last year.

Microsoft buys Israeli security startup Adallom for $320m, plans Israel cybersecurity centre – report

Microsoft has reportedly acquired Adallom for $320m in a cloud security push

Microsoft has apparently added Israeli cloud security startup Adallom to its arsenal, with multiple reports claiming the software company paid nearly $320m for the firm. The reports also suggest Microsoft is planning to open a cyber security centre in the region using some of the local talent it has acquired.

Adallom has not confirmed the acquisition, while Microsoft spokespeople told BCN that the company has “nothing to share” about the reports.

Adallom (an abbreviation of the Hebrew saying “ad halom,” which means “up to here” or “the last line of defence”) is a security service that integrates with the authentication chain of a range of SaaS applications and lets IT administrators monitor usage for every user on each device.

The software works in conjunction with endpoint and network security solutions and has a built-in, self-learning engine that analyses user activity on SaaS applications and assesses the riskiness of each transaction in real time, alerting administrators when activity becomes too risky for an organisation given its security policies.
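
Adallom has not published the internals of that engine, but the general idea can be sketched in a few lines. The toy example below scores each SaaS transaction against a per-user baseline and alerts above a threshold; the field names, baseline and scoring rules are invented purely for illustration and are not Adallom’s implementation.

```python
# Toy illustration of per-user risk scoring for SaaS activity.
# This is NOT Adallom's engine; fields, thresholds and rules are invented.
from dataclasses import dataclass

@dataclass
class Transaction:
    user: str
    device: str
    country: str
    action: str          # e.g. "download", "share", "login"
    bytes_moved: int

# Baseline of "normal" behaviour per user (hard-coded here for the example).
BASELINE = {
    "alice": {"devices": {"laptop-01"}, "countries": {"GB"}, "avg_bytes": 2_000_000},
}

def risk_score(tx: Transaction) -> int:
    """Return a 0-100 risk score; higher means more suspicious."""
    profile = BASELINE.get(tx.user)
    if profile is None:
        return 90                        # unknown user: treat as high risk
    score = 0
    if tx.device not in profile["devices"]:
        score += 30                      # unfamiliar device
    if tx.country not in profile["countries"]:
        score += 30                      # unusual location
    if tx.bytes_moved > 10 * profile["avg_bytes"]:
        score += 30                      # bulk data movement
    if tx.action == "share":
        score += 10                      # external sharing is riskier
    return min(score, 100)

tx = Transaction("alice", "phone-07", "RU", "download", 50_000_000)
score = risk_score(tx)
if score >= 60:
    print(f"ALERT: risky activity for {tx.user}: score={score}")
```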

The company, which has its headquarters in California and a research and development outfit in Israel, was founded by cybersecurity veterans Assaf Rappaport, Ami Luttwak and Roy Reznik in 2012.

The acquisition, first reported by Israeli business paper Globes, comes more than half a year after Microsoft’s last security purchase; according to that report, Microsoft plans to put Adallom and a number of other Israeli startups at the core of a new cybersecurity centre in Israel, a thriving hub for cybersecurity startups.

In November last year Microsoft ended months of speculation when it confirmed it bought another Israel-based security startup, Aorato, which offered software that tracks user behaviour when accessing applications linked to Active Directory, both in the cloud and on premise.

Tech News Recap for the Week of 7/13/2015

Were you busy last week? Here’s a quick tech news recap of articles you may have missed from the week of 7/13/2015.

Spending in the cloud IT infrastructure market is estimated to reach $33.4 billion this year. Gartner released its Magic Quadrant for x86 server virtualization. VMware and Microsoft are the leaders. Rackspace is pushing support for Azure.

Container management tools are a hot topic. Download our latest whitepaper, “10 Things to Know About Docker”.

By Ben Stephenson, Emerging Media Specialist

IT consultancy Mindtree buys Bluefin to bolster SAP expertise

Mindtree has acquired Bluefin to bolster its SAP cred

Mindtree has acquired Bluefin Solutions, an IT consultancy with particular expertise in SAP software, for an undisclosed sum. Krishnakumar Natarajan, chief executive and managing director of Mindtree, told BCN the move will help boost its European presence and its competencies around IoT, in-memory computing, and mobile.

Headquartered in the UK, Bluefin delivers a range of IT consultancy services with a specialisation in SAP technology, and Natarajan said the acquisition will bolster its reach in traditional European enterprises and public sector organisations, and create opportunities to bring its HANA cloud expertise to the US.

“SAP is not only a powerhouse of innovation, it is the commercial backbone of many of the largest global enterprises,” Natarajan said. “Mindtree and Bluefin can now offer unique integrated front-end, back-end and support services with unrivalled expertise on a global scale. This is essential to truly global organisations looking to use technology to digitize the entire value chain.”

James Appleby, group chief executive of Bluefin Solutions, told BCN that while its clients continue to look to it for expertise in many traditional areas where SAP has some tech leverage – BI, EPM, CRM, trade investment solutions – they are increasingly looking to take those platforms to the cloud, a strong growth area for the company.

“One of our most interesting client-observations is in the UK Public Sector, where the coincidental timing of government cut backs and the maturing of new technologies has been a disruptive force of innovation, particularly around citizen engagement, willingness to share and the opportunities offered by cloud,” he said.

“We certainly see an increased uptake of SaaS solutions in large enterprises with C4C really only taking off in the last 12 months in a meaningful way.  IaaS is now the default choice in many organisations for non-productive solutions and the decisions organisations are taking regarding HANA will increase the uptake of IaaS both as a platform for productive and non-productive use.”

He explained SAP’s HANA Enterprise Cloud had some teething problems at first, which wasn’t helped by the way the firm priced its consumption-based licensing, but that its PaaS – HANA Cloud Platform – remains massively underexploited in today’s market.

“Currently we are seeing it being used to extend SaaS applications but it is a powerful modern platform which could deliver much more for clients in terms of value,” he said.

KT, Nokia launch Internet of Things lab

KT and Nokia said the lab will be a testing ground for IoT innovators

Korean telco KT, alongside Nokia Networks, has announced the launch of the country’s first dedicated lab for progressing the development of the internet of things, making good on its MoU pledge at MWC earlier this year, reports Telecoms.com.

Nokia Networks intends the lab to be the bedrock of its "Programmable World" project, built on the convergence of IT and telecoms. It claims small and medium-sized IoT firms looking for advice, expertise and an environment in which to test new products and ideas will be able to make the best use of the lab.

The launch of the lab shows the progress being made in the IoT space, after KT and Nokia signed a memorandum of understanding to develop an IoT lab facility at Mobile World Congress in March. Andrew Cope, Nokia’s head of Korea, said LTE-M (the LTE network enabling M2M communications) is a key basis of the lab’s capabilities, and expressed his pleasure at having the lab ready so soon after the MoU announcement at MWC.

“Executing upon an agreement signed at MWC15, Nokia Networks and KT have taken another step forward on an exciting journey that will culminate in the creation of the ‘Programmable World’ in Korea and beyond,” he said. “After showcasing the world’s first LTE-M for interconnection of sensors, we have now created Korea’s first IoT lab – a solid-point of our commitment to standardise LTE-M and create a strong and sustainable ecosystem.”

Yun Kyoung-Lim, KT’s head of future convergence, said the lab’s approach to collaboration in IoT is essential to its development and to seeing its potential realised.

“Together with Nokia Networks, we are leveraging upon the convergence of IT and Telecommunications to hasten our transformation into an ICT powerhouse,” he said. “Furthermore, this lab is a strong iteration of our vision to become the number one player in Korea’s IOT market. Our efforts are aimed at encouraging greater participation by domestic companies, which are a crucial factor in driving the change towards a creative IoT-based economy.”

What Apps Can’t You Live Without?

The average smartphone user spends roughly 3 hours a day using their device. Admittedly, it can be hard to unglue ourselves from those screens when amazing apps are constantly being developed. The Parallels team is definitely guilty of a little app addiction—in fact, we decided to compile a list of awesome apps out there! Here […]

The post What Apps Can’t You Live Without? appeared first on Parallels Blog.

Cloud Migration: From Monolith to Microservices | @CloudExpo #Microservices

Cloud Migration Management (CMM) refers to the best practices for planning and managing the migration of IT systems from a legacy platform to a Cloud Provider through a combination of professional services consulting and software tools.
A Cloud migration project can be a relatively simple exercise, where applications are migrated ‘as is’, to gain benefits such as elastic capacity and utility pricing, but without making any changes to the application architecture, software development methods or business processes it is used for.

Secure DevOps Automation for AWS From @PalerraInc | @DevOpsSummit #DevOps #Containers #Microservices

Palerra, the cloud security automation company, announced enhanced support for Amazon AWS, allowing IT security and DevOps teams to automate activity and configuration monitoring, anomaly detection, and orchestrated remediation, thereby meeting compliance mandates within complex infrastructure deployments.

“Monitoring and threat detection for AWS is a non-trivial task. While Amazon’s flexible environment facilitates successful DevOps implementations, it adds another layer, which can become a target for potential threats. What’s more, securing infrastructure and meeting compliance mandates is not all up to Amazon and should be a shared responsibility with the organization using it,” said Rohit Gupta, co-founder and CEO of Palerra. “We have supported AWS since day one and are thrilled to announce we are expanding our support to provide holistic visibility across complex AWS deployments consisting of diverse infrastructure resources globally.”
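
Palerra’s platform is proprietary, but the kind of activity monitoring described above can be sketched against AWS’s own APIs. The snippet below uses boto3’s CloudTrail lookup_events call to flag failed console logins over the past day; the choice of event, the alerting rule, and the assumption that failures appear under responseElements.ConsoleLogin are illustrative, not a description of Palerra’s product.

```python
# Minimal sketch of AWS activity monitoring: flag failed console logins
# reported by CloudTrail. Requires AWS credentials with CloudTrail read
# access; this is an illustration, not Palerra's implementation.
import json
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")
now = datetime.now(timezone.utc)

resp = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    MaxResults=50,
)

for event in resp.get("Events", []):
    detail = json.loads(event["CloudTrailEvent"])
    # Assumed field layout: ConsoleLogin events record success/failure
    # under responseElements.ConsoleLogin.
    outcome = detail.get("responseElements", {}).get("ConsoleLogin")
    if outcome == "Failure":
        print(f"ALERT: failed console login for "
              f"{event.get('Username', 'unknown')} at {event['EventTime']}")
```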

Docker Logentries Container | @DevOpsSummit #DevOps #Docker #Containers #Microservices

Logentries offers a variety of ways to get logs out of your containerized environment, including our Linux Agent, application plugin libraries, and Syslog. In this post we’ll cover collecting and forwarding logs via our Docker Logentries Container, which requires Docker 1.5 or higher.
To configure the Docker Logentries Container you’ll need to do the following:
Create a destination log in your Logentries account to record your Docker logs.
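
The excerpt breaks off after that first step, so as a hedged illustration of the underlying mechanism, the sketch below forwards a single log line to a Logentries destination log using the token-prefixed TCP input. The host, port and token value are assumptions for illustration and should be checked against your own Logentries account settings.

```python
# Hedged sketch: forward one log line to a Logentries destination log via
# the token-based TCP input (each line is prefixed with the log token).
# Host, port and token below are assumptions, not values from the post.
import socket

LOGENTRIES_HOST = "data.logentries.com"   # assumed token TCP endpoint
LOGENTRIES_PORT = 10000                   # assumed plain-text token port
LOG_TOKEN = "00000000-0000-0000-0000-000000000000"  # placeholder token

def send_log(message: str) -> None:
    """Send one log line, prefixed with the destination log's token."""
    line = f"{LOG_TOKEN} {message}\n".encode("utf-8")
    with socket.create_connection((LOGENTRIES_HOST, LOGENTRIES_PORT)) as sock:
        sock.sendall(line)

send_log("container web-1 started")
```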

Disaster recovery: Where time matters

Disasters can strike at any time. They may be caused by human error, cyber-attacks or by natural disasters such as earthquakes, fires, floods and hurricanes. Even so, it is tempting to sit back, relax and not worry about the consequences for one’s business – often for cost reasons – but investment in business continuity is like an insurance policy. And it’s not just about disaster recovery: the best way to prevent downtime is to keep a step ahead of any potential disaster scenario.

Yet when unforeseen incidents do occur, the organisation’s disaster recovery plan should instantly kick in to ensure that business continuity can be maintained with either no interruption or a minimal amount of it. An e-commerce firm, for example, could lose sales to its competitors if its website goes down. Downtime can also damage the company’s brand reputation. For these reasons alone business continuity can’t wait; yet backing up and replicating large volumes of data has traditionally required a batch window, which becomes increasingly challenging with the growth of big data.

Avoiding complacency

So are organisations taking business continuity seriously? They are, according to Claire Buchanan, chief commercial officer (CCO) at Bridgeworks: “I think that most businesses take business continuity seriously, but how they handle it is another thing”. In other words, it is how companies manage disaster recovery and business continuity that makes the difference.

These two disciplines are in many respects becoming synonymous too. “From what I understand from Gartner, disaster recovery and business continuity are merging to become IT services continuity, and the analyst firm has found that 34% of inbound calls from corporate customers, those that are asking for analyst help, is about how they improve their business continuity”, she says.

Phil Taylor, director and founder of Flex/50 Ltd, concurs with this view, stating that a high percentage of organisations are taking disaster recovery and business continuity seriously. “Businesses these days can’t afford to ignore business continuity particularly because of our total dependence on IT systems and networks”, he says. The ongoing push for mobile services and media-rich applications will, he says, generate increasing transaction rates and huge data volumes too.

Buchanan nevertheless adds that most businesses think they are ready for business continuity, but once disasters actually strike the real problems occur. “So what you’ve got to be able to do is to minimise the impact of unplanned downtime when something disruptive happens, and with social media and everything else the reputational risk with a business not being able to function as it should is huge”, she explains. In her experience, the problem is that awareness slips as time goes on.

Bryan Foss, a visiting professor at Bristol Business School and Fellow of the British Computer Society, finds: “Operational risks have often failed to get the executive and budgetary attention they deserve as boards may have been falsely assured that the risks fit within their risk appetite.” Another issue is that you can’t plan for when a disaster will happen, but you can plan to prevent it from causing the loss of service availability, financial or reputational damage.

To prevent damaging issues from arising Buchanan says organisations need to be able to provide support for end-to-end applications and services where availability is unaffected by disruptive events. When they do occur, the end user shouldn’t notice what’s going on – it should be transparent, according to Buchanan. “We saw what happened during Hurricane Sandy, and the data centres in New York – they took a massive hit”, she says. The October 2012 storm damaged a number of data centres and took websites offline.

Backup, backup!

Traditionally, backing up is performed overnight when most users have logged off their organisation’s systems. “Now, in the days where we expect 24×7 usage and the amount of data is ever increasing, the backup window is being squeezed more than ever before, and this has led to solutions being employed that depend on an organisation’s Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO)”, Buchanan explains.

“For some organisations such as financial services institutions, where these are ideally set at zero, synchronous replication is employed, and this suggests that the data copies are in the same data centre or the data centres are located a few miles or kilometres from each other”, she adds. This is the standard way to minimise data retrieval times, and it is what most people have done in the past because they are trying to support data synchronisation. Yet placing data centres in the same circle of disruption can be disastrous whenever a flood, terrorist attack, power outage and so on occurs.

For other organisations an RPO and RTO lag of a few milliseconds is acceptable, so their data centres can be placed further apart. Even then, replication doesn’t negate the need for backups – and modern technologies allow machines to be backed up whilst they are still operational.
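
To make the RPO trade-off concrete, here is a minimal sketch that checks how much data, measured in time, would be lost if the primary failed right now; the objectives and timestamps are invented for the example.

```python
# Illustrative RPO check: how much data (in time) would be lost if the
# primary failed right now? Values below are invented for the example.
from datetime import datetime, timedelta, timezone

RPO = timedelta(minutes=15)   # maximum tolerable data loss
RTO = timedelta(hours=1)      # maximum tolerable time to restore service

last_replica = datetime(2015, 7, 20, 11, 50, tzinfo=timezone.utc)
now = datetime(2015, 7, 20, 12, 10, tzinfo=timezone.utc)

exposure = now - last_replica           # data written since the last copy
print(f"Current exposure: {exposure}")  # 0:20:00
if exposure > RPO:
    print("RPO breached: replicate more frequently or synchronously.")
# An RPO (and RTO) of zero is only achievable with synchronous replication,
# which in turn constrains how far apart the two sites can be.
```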

Comparing backups

Her colleague David Trossell, CEO of Bridgeworks, adds that backup-as-a-service (BaaS) can help by reducing infrastructure-related capital investment costs. “It’s simple to deploy and you only pay for what you use; however, the checks and balances with BaaS needn’t be treated any differently from on-site backups”, he explains. In other words, when backup is installed within a data centre, performance is governed by the capability of the devices employed – such as tape or disks. In contrast, performance with BaaS is governed by the connection to the cloud service provider, and Trossell says this defines the speed at which data can be transferred to the cloud.

“A good and efficient method of moving data to the cloud is essential, but organisations should keep a backup copy of the data on-site as well as off-site and this principle applies to BaaS”, he advises.

Essentially, this means a cloud service provider should secure the data in another region where it operates. In some circumstances it might also be cheaper to bring the backup function in-house, while for certain types of sensitive data a hybrid cloud approach might be more suitable.

Time is the ruler

Trossell says time is the ruler of all things, and he’s right. The challenge, though, is for organisations to achieve anything like 95% bandwidth utilisation from their networks, because of the way the TCP/IP protocol works. “Customers are using around 15% of their bandwidth, and some people try to run multiple streams, which you have to be able to run down physical connections from the ingress to the egress in order to attain 95% utilisation”, reveals Buchanan.
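
The reason a single stream falls so far short is TCP’s window mechanism: per-stream throughput is roughly capped at window size divided by round-trip time. The quick calculation below uses illustrative figures, not Bridgeworks’ measurements, to show how little of a fast, high-latency WAN link one stream can fill.

```python
# Why one TCP stream underuses a fast, high-latency WAN link:
# max throughput per stream ~= window size / round-trip time.
# Figures below are illustrative, not measurements from the article.
WINDOW_BYTES = 64 * 1024           # classic 64 KB TCP window (no scaling)
RTT_SECONDS = 0.05                 # 50 ms round trip across the WAN
LINK_BITS_PER_SEC = 1_000_000_000  # 1 Gbit/s link

throughput_bps = (WINDOW_BYTES * 8) / RTT_SECONDS   # bits per second
utilisation = throughput_bps / LINK_BITS_PER_SEC

print(f"Per-stream throughput: {throughput_bps / 1e6:.1f} Mbit/s")
print(f"Link utilisation: {utilisation:.1%}")   # roughly 1% of the link
```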

For example, one Bridgeworks customer needed to back up 70TB of data using a 10GB WAN, a process that took 42 days to complete. “They were looking to replicate their entire environment, which was going to cost up to £2m, and we put in our boxes within half an hour as a proof of concept”, she explains. Bridgeworks’ team restricted the bandwidth on the WAN to 200MB, which resulted in the customer being able to complete an entire backup within just 7 days – achieving “80% expansion headroom on the connection and 75% on the number of days they clawed back”, she says. The customer has since been able to increase their data volumes.
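
For a rough sense of scale – treating 70TB as 70 × 10¹² bytes, since the article does not spell out the units – the effective transfer rates before and after work out as follows:

```python
# Back-of-envelope rates for the 70 TB example (assuming decimal terabytes).
TOTAL_BYTES = 70e12
SECONDS_PER_DAY = 86_400

before = TOTAL_BYTES / (42 * SECONDS_PER_DAY)   # ~19 MB/s over 42 days
after = TOTAL_BYTES / (7 * SECONDS_PER_DAY)     # ~116 MB/s over 7 days

print(f"42-day transfer: {before / 1e6:.1f} MB/s")
print(f" 7-day transfer: {after / 1e6:.1f} MB/s  ({after / before:.0f}x faster)")
```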

Providing wider choice

“At the moment, with outdated technology, CEOs and decision-makers haven’t had a choice with regards to the distance between their data centres without having to think about the impact of network latency, but WANrockIT is giving the decision-maker the power to make a different choice to the one that has historically been made”, says Trossell. He claims that WANrockIT gives decision-makers freedom, good economics and a high level of frequency, and that it maximises the infrastructure in a way that means their organisations don’t need to throw anything away.

Phil Taylor nevertheless concludes with some valid advice: “People need to be clear about their requirements and governing criteria because at the lowest level all data should be backed-up…, and business continuity must consider all operations of a business – not just IT systems”.

To ensure that a disaster recovery plan works, it has to be regularly tested. Time is of the essence, and so data backups need to be exercised regularly, with continuous availability maintained in a way that ensures maintenance doesn’t also prove disruptive. Testing will help to iron out any flaws in the process before disaster strikes.