Google Cloud launches in Poland as European data centre expansion continues

Google Cloud’s European expansion continues with the launch of a new region in Poland, alongside the unveiling of strategic partnerships.

The new region will be hosted in Warsaw as part of Google’s commitment to Central and Eastern Europe. Prospective customers of the region will have access to the usual products, from Compute Engine and App Engine to Google Kubernetes Engine, Cloud Bigtable, Cloud Spanner and BigQuery.

“As part of our strategic partnership, DCP (Domestic Cloud Provider) will become a reseller of Google Cloud services in Poland and will build managed services capabilities around Google Cloud,” wrote Google Cloud CEO Thomas Kurian in a blog post. “With this DCP partnership, we will be able to boost our support for Polish enterprises, providing advanced infrastructure and software that suits their needs.

“Together, our goal is to accelerate cloud adoption by large and small businesses alike, across all industries,” Kurian added. “Over the next five years we’ll train experts to help Polish businesses onboard to the cloud, as well as provide insights and strategic advice on how companies can maximise the benefits of their cloud deployments.”

Michal Potoczek, chief executive of Poland’s national cloud operator, added: “We believe in a multi-cloud strategy. A Google Cloud region, together with our own infrastructure, will allow us to build hybrid services which will bring even more value to our customers.”

This marks the first data centre launch by one of the big three in Poland. While Microsoft claims to have the widest global reach, its next European target is Norway; for Amazon Web Services (AWS), Italy is the next port of call.

Google had previously opened the doors to its Zurich data centre region in March, while Microsoft unveiled plans for Azure availability in Germany and Switzerland last month as the European data centre arms race continues.

“Bulletproof” dark web data centre seized by German police

Connor Jones

30 Sep, 2019

German authorities scuppered a pervasive dark web operation on Friday, saying it was being run out of a former NATO bunker.

Seven individuals have been arrested on the suspicion of being associated with organised crime and as accessories to hundreds of thousands of crimes through their hosted dark web platforms such as the Wall Street Market and Cannabis Road.

The outfit is believed to be spearheaded by a 59-year-old Dutchman who, authorities understand, acquired the bunker located in the small town of Traben-Trarbach in 2013.

After buying the bunker, the man, who is yet to be named by authorities, is alleged to have transformed it into a large and highly secure data centre designed “exclusively for illegal purposes”, according to prosecutor Juergen Bauer, as reported by the Associated Press.

Dark web marketplaces are infamous for being cornucopias of crime where people can buy drugs, weapons, credit card information, forged documents and more.

In total, 13 suspects aged 20 to 59 are linked to the operation of the sites, and all can be charged as accessories to every crime and transaction that took place on the platforms they hosted.

“I think it’s a huge success… that we were able at all to get police forces into the bunker complex, which is still secured at the highest military level,” said regional criminal police chief Johannes Kunz. “We had to overcome not only real, or analogue, protections; we also cracked the digital protections of the data centre.”

Authorities described the facility as a “bulletproof hoster”, designed specifically to conceal the activity from law enforcement.

Policing the unknown

The dark web has proven to be a reliable sanctuary for cyber criminals due to its decentralised and anonymous nature. Websites are accessed through The Onion Router (Tor) browser and a user’s connection is redirected through multiple different global locations which makes the identification of an online criminal nigh-on impossible.

The proliferation of cryptocurrencies has also contributed to the anonymity of criminals: like their web traffic, payments made using cryptocurrencies are routed through multiple addresses, making them difficult to track.

It started with bitcoin but since then other cryptocurrencies have gained popularity, and new and more anonymous coins have been devised. Monero is one such coin that’s favoured by criminals, as it conceals the sender’s and recipient’s addresses more comprehensively than others.

Cryptocurrency tumblers are another tool that hampers policing efforts. They offer a service that’s the cryptocurrency equivalent of money laundering; users send their coins to a tumbling service, pay a fee and get completely different coins in return, further complicating tracking efforts made by authorities.

While authorities have famously been able to clamp down on certain marketplace operations, their success, in some cases, hasn’t been attributed to sophisticated web tracking techniques – the fatal clues have sometimes been found through the criminals’ poor web hygiene.

For example, perhaps the most well-known dark web market, Silk Road, was eventually seized after investigators found posts by its owner, Ross Ulbricht, advertising the marketplace on a ‘clear net’ bitcoin forum, along with his personal email address in a separate post.

The network is difficult to crack, but as the FBI demonstrated with the seizure of Playpen, authorities can take down sites if they hack the endpoint. Authorities deployed malware on the abuse-distribution platform that revealed the IP address of any user that clicked on illegal images, leading to the arrest of the site’s operator.

IT Pro contacted the National Cyber Security Agency for comment but it did not reply at the time of publication.

Dedicated global taskforces

As dark web crime becomes more widespread, dedicated security organisations have been formed around the world to help tackle it.

The seizure of the AlphaBay and Hansa marketplaces in 2017 was a globally coordinated effort named Operation Bayonet, led by Europol with help from law enforcement authorities in Thailand, the Netherlands, Lithuania, Canada, the United Kingdom and France.

The huge effort required in Bayonet provided the catalyst that led to the formation of Europol’s own dedicated dark web team and the US followed suit six months later with its Joint Criminal Opioid Darknet Enforcement (J-CODE) team.

“Criminals think that they are safe on the darknet, but they are in for a rude awakening,” said Attorney General Sessions on the J-CODE launch. “We have already infiltrated their networks, and we are determined to bring them to justice.

“In the midst of the deadliest drug crisis in American history, the FBI and the Department of Justice are stepping up our investment in fighting opioid-related crimes. The J-CODE team will help us continue to shut down the online marketplaces that drug traffickers use and ultimately that will help us reduce addiction and overdoses across the nation.”

How companies can tell good cloud sprawl from bad: A guide

Now that operating in the cloud is officially mainstream, it’s gotten a reputation for costing more than expected. Cloud sprawl, however, is rarely the problem – in fact, we like to think of sprawl as a symptom. On the bright side, it’s often a symptom of creativity and innovation by your IT team. On the dark side, it’s also often a symptom of poor planning and a lack of governance.

Here’s a guide to structuring (or restructuring) your cloud adoption so that any growth in expenses is tied to commensurate improvements in outcomes.

Unplanned sprawl is always bad

The on-demand nature of cloud infrastructure and services makes growth frictionless. It’s as easy to spin up a new server as it is to download a document. While that’s fantastic for a team’s ability to innovate and grow, it’s also a little frightening.

When commissioning new cloud services involves no friction at all, it’s entirely too easy to exceed your budget.

What we often see happen is that a company decides to dip a toe in cloud transformation. They don’t worry about creating detailed governance guidelines because they’re treating their cloud use as a trial – they want to see if it’s right for them.

But as soon as engineers see what’s possible in the cloud, they get ideas. They start to figure out creative solutions for the problems they’ve been dealing with for years. And because the cloud is frictionless, they can spin up new resources without batting an eye. Before you know it, the cloud bill is double what you budgeted and you have no plan in place for ensuring that any of your experiments become ROI positive.

…but sprawl itself can be transformative

Of course, the lack of friction is also what makes the cloud such an amazing tool.

For example, in the pre-cloud era, I was working at a hedge fund. A colleague and I wanted to take massive amounts of data and run it through a bunch of scenarios to start doing predictive data arbitrage.

We knew we’d be able to glean valuable insights if we ran our data through enough scenarios and models. But in order to process all the data we’d bought in time for our analysts and quants to evaluate and make decisions about it, we’d have needed about 200 servers, which would have meant millions of dollars in capex spend.

And keep in mind: this was just for research. We had no way to guarantee that our analysis would have a positive ROI. Of course we didn’t get it approved.

With the cloud, though, we could spin up enough servers in a few days, run the scenarios, and spin down the servers. The cost would be much lower, which means the additional revenue needed to justify the project would be lower.

And that’s true in nearly every situation: because you can provision resources on an as-needed basis, you can achieve much smaller returns to justify your investment. And as you discover ways to increase profit incrementally, you can, thanks to the way the cloud works, rinse and repeat.
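The economics behind that hedge-fund example can be sketched with some quick arithmetic. All of the figures below are invented for illustration (the article gives only "about 200 servers" and "millions of dollars"), but they show why a short research burst that is unjustifiable as capex can be trivial as an on-demand bill:

```python
# Hypothetical comparison of buying servers outright versus renting
# cloud capacity for a short research burst. Every figure here is an
# assumption for illustration, not a number from the article.

CAPEX_PER_SERVER = 10_000        # purchase price per server (USD)
SERVERS_NEEDED = 200
HOURLY_CLOUD_RATE = 1.50         # on-demand price per server-hour (USD)
BURST_HOURS = 72                 # run the scenarios for three days

capex_cost = CAPEX_PER_SERVER * SERVERS_NEEDED
cloud_cost = HOURLY_CLOUD_RATE * SERVERS_NEEDED * BURST_HOURS

print(f"Buy outright: ${capex_cost:,.0f}")   # Buy outright: $2,000,000
print(f"Cloud burst:  ${cloud_cost:,.0f}")   # Cloud burst:  $21,600
```

With the burst costing roughly 1% of the capex route, the additional revenue needed to make the research ROI positive shrinks by the same factor, which is exactly the "rinse and repeat" dynamic described above.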

To ensure positive sprawl, invest in planning and governance

So how can you enjoy the many benefits of cloud infrastructure without suffering from its pitfalls? Start with a plan. 

Even if you only have plans to run a cloud trial, start with a plan. 

The cloud offers near-infinite ways for your engineers to solve problems, and as soon as they get a glimpse, I guarantee they’ll want to try things out. Without a plan in place, they’ll do exactly that, and the next thing you know, you’ll have dozens of active cloud accounts paid for with dozens of different credit cards. (This is a real thing we see.)

So, again: make a plan. Make a plan for how new cloud spend is approved and paid for. Make a plan for reviewing cloud usage and spinning down resources that are no longer useful or active. Make governance guidelines.
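The "plan for reviewing cloud usage and spinning down resources" can be as simple as a recurring script over your resource inventory. The sketch below uses plain dicts rather than any real provider API, and the tag names, thresholds and inventory are all hypothetical, but it shows the shape of such a review: flag anything with no owner, or anything idle past a cutoff:

```python
# A minimal sketch of the usage review a governance plan might mandate:
# walk an inventory of cloud resources (plain dicts here, not a real
# provider API) and flag anything without an owner, or idle too long.

from datetime import date

inventory = [
    {"id": "vm-001", "owner": "data-team", "last_used": date(2019, 9, 20)},
    {"id": "vm-002", "owner": None,        "last_used": date(2019, 6, 1)},
    {"id": "db-001", "owner": "billing",   "last_used": date(2019, 4, 12)},
]

def review(resources, today=date(2019, 9, 30), idle_days=30):
    """Return IDs needing attention: no owner, or idle past the cutoff."""
    flagged = []
    for r in resources:
        if r["owner"] is None or (today - r["last_used"]).days > idle_days:
            flagged.append(r["id"])
    return flagged

print(review(inventory))  # ['vm-002', 'db-001']
```

A real governance plan would wire a check like this into whatever inventory your provider exposes and route the flagged list to the owning teams before anything is spun down.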

If you’re not sure what should go into your cloud plan, consider working with consultants who have experience creating these plans. With a plan in place, you’ll have a clear idea of how you’ll be using cloud resources to improve your business’s operations. Without a plan, you will see sprawl. Most importantly, without a plan, you may end up spending much more than you expected – without achieving the outcomes you were after in the first place.

Creative solutions start with practical guidelines

Sprawl is almost inevitable when a company launches a cloud transformation: because the cloud is capable of more than in-house servers, moving to the cloud inevitably means that companies do more than they used to.

To ensure that you do more in a budget-conscious way, invest time and energy upfront in establishing a plan and governance rules for your cloud usage. Once you’ve laid out rules of the road, you can let your developers explore safely, without racking up unexpected bills.

Three reasons why killing passwords will improve your cloud security

Jack Dorsey’s Twitter account getting hacked by having his telephone number transferred to another account without his knowledge is a wake-up call to everyone about how vulnerable mobile devices are. The hackers relied on SIM swapping and convincing Dorsey’s telecom provider to bypass requiring a passcode to modify his account. With the telephone number transferred, the hackers accessed the Twitter founder’s account. If the telecom provider had adopted zero trust at the customer’s mobile device level, the hack would have never happened.

Cloud security’s weakest link is mobile device passwords

The Twitter CEO’s account getting hacked is the latest in a series of incidents that reflect how easy it is for hackers to gain access to cloud-based enterprise networks using mobile devices. Verizon’s Mobile Security Index 2019 revealed that 67% of enterprises are less confident in the security of their mobile assets than in that of any other device.

Mobile devices are one of the most porous threat surfaces a business has. They’re also the fastest-growing threat surface, as every employee now relies on their smartphone as their ID. IDG’s recent survey, completed in collaboration with MobileIron and titled Say Goodbye to Passwords, found that 89% of security leaders believe that mobile devices will soon serve as your digital ID to access enterprise services and data.

Because they’re porous, proliferating and turning into primary forms of digital IDs, mobile devices and their passwords are a favorite onramp for hackers wanting access to companies’ systems and data in the cloud. It’s time to kill passwords and shut down the many breach attempts aimed at cloud platforms and the valuable data they contain.

Three reasons why killing passwords improves your cloud security

Killing passwords improves cloud security by:

  • Eliminating privileged access credential abuse. Privileged access credentials are best sellers on the Dark Web, where hackers bid for credentials to the world’s leading banking, credit card, and financial management systems. Forrester estimates that 80% of data breaches involve compromised privileged credentials, and a recent survey by Centrify found that 74% of all breaches involved privileged access abuse. Killing passwords shuts down the most common technique hackers use to access cloud systems.
  • Eliminating the threat of unauthorized mobile devices accessing business cloud services and exfiltrating data. Acquiring privileged access credentials and launching breach attempts from mobile devices is the most common hacker strategy today. By killing passwords and replacing them with a zero-trust framework, breach attempts launched from any mobile device using pirated privileged access credentials can be thwarted. Leaders in the area of mobile-centric zero trust security include MobileIron, whose innovative approach to zero sign-on solves the problems of passwords at scale. When every mobile device is secured through a zero-trust platform built on a foundation of unified endpoint management (UEM) capabilities, zero sign-on from managed and unmanaged services becomes achievable for the first time.
  • Giving organizations the freedom to take a least-privilege approach to grant access to their most valuable cloud applications and platforms. Identities are the new security perimeter, and mobile devices are their fastest-growing threat surface. Long-standing traditional approaches to network security, including “trust but verify” have proven ineffective in stopping breaches. They’ve also shown a lack of scale when it comes to protecting a perimeter-less enterprise. What’s needed is a zero-trust network that validates each mobile device, establishes user context, checks app authorization, verifies the network, and detects and remediates threats before granting secure access to any device or user. If Jack Dorsey’s telecom provider had this in place, his and thousands of other people’s telephone numbers would be safe today.


The sooner organizations move away from being so dependent on passwords, the better. The three reasons why killing passwords improves cloud security are just the beginning. Imagine how much more effective distributed DevOps teams will be when security isn’t a headache for them anymore, and they can get to the cloud-based resources they need to get apps built.

With more organizations adopting a mobile-first development strategy, it makes sense to have a mobile-centric zero-trust network engrained in key steps of the DevOps process. That’s the future of cloud security, starting with the DevOps teams creating the next generation of apps today.

UK data centres blitz climate change targets

Keumars Afifi-Sabet

27 Sep, 2019

Data centre operators in the UK have fulfilled their climate change obligations two years ahead of schedule, exceeding the requirement of a 13.52% reduction in power usage by a healthy margin.

Under the climate change agreement (CCA) scheme for data centres, participants are required to reduce their Power Usage Effectiveness (PUE) by 15% by the end of 2020. Calculations by techUK, a trade association for the UK’s tech industry, show the sector achieved a reduction of 16.72%.

“Provisional results from the Climate Change Agreement (CCA) for Data Centres suggest that the sector has successfully met its efficiency target, the third of four milestones in the life of the scheme,” said techUK’s associate director for data centres.

“Collectively, UK operators have performed so well that they have fulfilled the final scheme target two years ahead of schedule. However, at individual facility level, the picture is more mixed, so the sector is not complacent and will be working harder than ever to build on these improvements in the final stage.”

The headline figure is only an aggregate of outcomes across 150 sites, and a more detailed examination suggests there’s still plenty of work to be done. Of 88 target units, with ‘target units’ defined as combinations of several data centre sites, 40 passed the requirements while 48 failed.

Those sites which failed to meet their targets and did not have surplus carbon from previous assessment periods were obliged to buy out the carbon needed to meet their targets if they wished to remain certified.

Brexit uncertainty has been cited as a key reason for this failure among sites that have not met their targets, due to a reduction in enterprise customers in the last few years. Older sites that were full at the start of the scheme in 2013 will also be disproportionately affected as they have struggled to realise the benefits of efficiency improvements.

Despite the reported success of UK data centre operators, critics argue that PUE is not a robust enough measure of energy efficiency. It’s calculated as the ratio of the total amount of energy used by a facility to the energy delivered to computing equipment.

“The CCA target of 15 per cent improvement in PUE has been criticised by external observers unfamiliar with the commercial data centre business model,” techUK’s report said. “They claim that this is not nearly tough enough.

“Commercial operators providing colocation (or colocation-style services) control the infrastructure and not the IT, which remains a customer matter. PUE is a performance metric limited to infrastructure, so it is the best, or perhaps more accurately the least worst, metric to use for this type of provider.”
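The PUE arithmetic described above is straightforward to spell out. The figures in this sketch are illustrative only (they are not taken from the techUK results), but they show how a facility-level PUE and a percentage improvement of the kind the CCA tracks are computed:

```python
# PUE as defined above: total facility energy divided by the energy
# delivered to IT equipment. All figures below are illustrative only,
# not taken from the techUK results.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

def pue_improvement(baseline: float, current: float) -> float:
    """Percentage reduction relative to the baseline PUE."""
    return (baseline - current) / baseline * 100

# A site that once drew 1.8 MWh overall for every 1 MWh of IT load...
before = pue(1.80, 1.00)   # 1.8
# ...and later tightened that to 1.5 MWh overall for the same IT load.
after = pue(1.50, 1.00)    # 1.5

print(f"{pue_improvement(before, after):.2f}% reduction")  # 16.67% reduction
```

Note that a PUE of exactly 1.0 would mean every watt entering the facility reaches the IT equipment, which is why the metric only reflects infrastructure overheads (cooling, power distribution) and says nothing about how efficiently the IT load itself is used, as the colocation operators quoted above point out.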

The role of the tech industry in exacerbating climate change has come under scrutiny in recent years. This has especially been the case with regards to the role data centres play in maintaining cryptocurrencies like Bitcoin.

Apple has even speculated on the silver linings of climate change, suggesting more natural disasters could fuel iPhone sales.

The CCA was struck following negotiations between the Department for Business, Energy and Industrial Strategy (BEIS) and techUK, and is expected to end in 2023. There is, as of yet, no indication that BEIS will devise a replacement energy efficiency programme once the CCA expires.

How to choose the perfect video conferencing kit

Dave Mitchell

26 Sep, 2019

Effective communication has always been critical to success in business – and in today’s global economy that often means working closely with customers and partners in far-flung locations. Email and voice calls might be sufficient for basic exchanges of information, but videoconferencing (VC) offers clear benefits in communication and collaboration – so it’s no surprise that more businesses are embracing it.

VC isn’t just about communicating outside of the company, either. Plenty of modern companies make use of virtual workplaces, where staff and teams don’t share the same physical office. VC provides the facilities for face-to-face meetings, helping staff to be just as efficient and productive as an on-site team. It’s good for morale too, allowing remote workers to feel more involved and less isolated.

When you think of dedicated VC hardware, you might picture the boardroom of a big corporation – but it’s nowadays a perfectly affordable option for SMBs. Indeed, it can pay for itself in mere months by drastically reducing travel costs and minimising the time that employees waste in transit. Environmentally-aware businesses will appreciate how it also reduces the pollution generated by road trips and flights.

Fancy meeting you here

When choosing a VC solution, you need to decide whether you want it to be portable or static. Portable models combine a camera, mics and a speaker in a single compact unit, and only require a USB connection to the computer that’s running your chosen VC application. This makes them ideal for impromptu meetings, and they can be easily moved around and even transported to a client’s premises if need be.

Static models generally feature separate camera and speakerphone units, and are best suited to meeting rooms where the tables and chairs stay in fixed positions. There are also hybrid models, which are too large to call truly portable, but combine the camera and microphone in a single unit that can be carried about to different rooms if required.

If you want to use your VC system in combination with a large, wall-mounted display, you’ll find it convenient to choose a product that provides its own HDMI ports. If your system lacks these, you can still watch the remote side of the conversation on a TV or monitor, but this will need to be connected directly to the computer hosting the meeting app.

It’s also worth looking out for Bluetooth and NFC support, which lets users easily pair their mobiles with the speakerphone unit to make hands-free calls.

Sound chaser

Setting up a VC system can be tedious, as you have to find somewhere to position the camera so that everyone is in shot. However, the latest VC products include a smart new feature designed to solve this. It comes with a range of names, such as “speaker tracking”, “intelligent attention” and “RightSight”; whatever you call it, it uses the input from the microphone to work out where the person speaking is located, and dynamically focus the camera on them.

In our testing we found that the technology works extremely well; some systems even crop and frame the meeting room view to cut out distracting empty space around the speaker. The Owl Labs Meeting Owl makes clever use of a fisheye lens to provide intelligent framing over a full 360° view – perfect for round-table meetings.

Other features worth looking out for are audio-processing technologies that can improve the sound quality of your meetings by automatically identifying and removing background noises such as traffic, keyboard clatter or paper shuffling, allowing listeners to hear the speaker more clearly. Static VC room solutions normally use multiple microphones to ensure everyone can be heard; most mics can easily pick up sound from up to 12ft away, but if you’re organising a meeting around a long table, you should consider products that allow you to add more microphone pods to increase coverage.

Special 4K

When choosing your VC hardware, you may wonder whether it’s worth going for a 4K “Ultra HD” system. In most cases, we suggest that it is. A lower-resolution system may be cheaper, but 4K technology is already firmly entrenched in the consumer market, and economies of scale mean that it won’t be long before it becomes the standard.

The advantages are clear: 4K video has four times as many pixels as a 1080p HD feed. That means far more fine detail is captured, which can help participants pick up on facial expressions and get a good clear view of products and displays. Any text on whiteboards and in presentations will be crystal clear.

The main challenge to 4K uptake is its bandwidth requirements. All things being equal, four times the detail means four times the data. To get a smooth 4K video feed requires at least 15Mbits/sec of dedicated bandwidth and preferably 25Mbits/sec.

If that’s a stretch, new technologies can help. The H.265 HEVC (high efficiency video coding) standard aims to slash 4K bandwidth requirements by as much as half. Other solutions use proprietary encoding, such as the Lifesize software that claims to require as little as 3Mbits/sec for 4K video and 6Mbits/sec for presentations.
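The pixel arithmetic behind those bandwidth figures is easy to check. The 4 Mbit/s starting point for 1080p below is an assumed round number for illustration, not a figure from any vendor, but the linear scaling lands neatly inside the 15-25 Mbit/s guideline quoted above:

```python
# The pixel arithmetic behind the bandwidth claims above. The 1080p
# bitrate is an assumed round number, not a measured figure.

hd_pixels = 1920 * 1080          # a 1080p HD frame
uhd_pixels = 3840 * 2160         # a 4K "Ultra HD" frame
print(uhd_pixels / hd_pixels)    # 4.0 -- four times the pixels per frame

# If 1080p conferencing is comfortable at ~4 Mbit/s, scaling linearly
# suggests ~16 Mbit/s for 4K, inside the 15-25 Mbit/s guideline; an
# H.265/HEVC encoder then aims to roughly halve that again.
h264_4k_estimate = 4 * 4                    # Mbit/s
h265_4k_estimate = h264_4k_estimate / 2     # Mbit/s
print(h264_4k_estimate, h265_4k_estimate)   # 16 8.0
```

In practice, encoders don't scale perfectly linearly with pixel count, which is how proprietary schemes like the Lifesize one mentioned above can claim far lower figures, but the rough arithmetic explains why 4K demands a dedicated bandwidth plan.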

Be in my video

If you’re worried about whether your VC hardware will work with your preferred communications platform, fear not. All major systems are USB video class-compliant, so they don’t require any special drivers, and many support a range of VC platforms, including Cisco Webex, Google Hangouts, BlueJeans, Skype for Business, Zoom and more. Even so, we recommend trialling them first to make sure they have the features and mobile support your users demand.

No matter what your needs, sophisticated videoconferencing products are now becoming very affordable for businesses of all sizes – allowing you to embrace the virtual workplace and reap its cost benefits. Read on for our reviews of four quite different VC systems, with differing designs and price points, to find the one that will help you enhance and unify your communications – rather than complicating them.

Best security practices for migrating to the cloud: A guide

As more businesses embark on their digital transformation journey and shift towards a mobile cloud computing model, they must rethink their entire security architecture. Migrating to the cloud effectively marks the end of the traditional network perimeter, which means that the standard security protocols designed to protect the perimeter are no longer fit for purpose. 

A recent report from Outpost24 found that while 42 percent of organisations are concerned about cloud security, 27 percent do not know how quickly they could tell if their cloud data had been compromised. This shows that many organisations are failing to follow cloud security best practices, leaving them vulnerable to security threats. This article outlines some of the best security practices organisations should follow when migrating to the cloud.

The cloud challenge 

The rising popularity of the cloud cannot be overstated. IDG’s 2018 Cloud Computing Study estimated that 77 percent of organisations use cloud services. It’s also been estimated that the average enterprise uses almost 1,000 applications. However, the added freedom granted by cloud services also comes with risks. As organisations move to a mobile cloud computing model, their employees have access to critical business data anytime from anywhere, eroding the traditional network perimeter. This has opened up new access points for hackers to exploit and created a massive attack surface that traditional security systems, such as firewalls and gateways, cannot protect against.

Forget the perimeter, forget trust 

With the modern mobile cloud computing model dissolving the traditional network perimeter, organisations need to adapt. Businesses should look to implement a zero-trust security framework, which has been designed in direct response to the diminishing perimeter. Zero-trust considers an organisation’s network to be already compromised and as a result applies a ‘never trust, always verify’ logic to network access. 

With data flowing freely between various devices and servers in the cloud, there are more potential access points to be exploited. The zero trust model takes this into account and requires the device to be verified, the user’s context to be established, the apps to be authorized, the network to be verified and the presence of threats to be detected and mitigated.  Only after all these checks have been completed will the user be granted access to the data they are trying to view. 
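The check sequence above can be sketched as a simple gate: every verification must pass before access is granted. The names and rules in this sketch are hypothetical; a real zero-trust product wires these checks into device management and identity systems rather than plain booleans, but the "never trust, always verify" logic is the same:

```python
# A sketch of the zero-trust check chain described above. The field
# names and rules are hypothetical simplifications; a real product
# derives these signals from UEM, identity and threat-detection systems.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    device_verified: bool    # device enrolled and in a healthy state
    user_context_ok: bool    # user, location and time look plausible
    app_authorized: bool     # the calling app is approved
    network_verified: bool   # connection comes from a vetted network
    threats_detected: bool   # result of an endpoint threat scan

def grant_access(req: AccessRequest) -> bool:
    """Never trust, always verify: every check must pass."""
    return all([
        req.device_verified,
        req.user_context_ok,
        req.app_authorized,
        req.network_verified,
        not req.threats_detected,
    ])

clean = AccessRequest(True, True, True, True, False)
compromised = AccessRequest(True, True, True, True, True)
print(grant_access(clean), grant_access(compromised))  # True False
```

The key design point is that a single failed check denies access outright; there is no fallback to "trusted because it's inside the network", because in this model there is no inside.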

Scrap passwords 

The password is the weakest link in enterprise security – a recent survey conducted by MobileIron and IDG found that 90 percent of the security professionals questioned had seen unauthorised access attempts as a result of stolen credentials – and unfortunately, the advent of cloud computing has further exposed the vulnerability of the password. With cloud services and applications presenting organisations with multiple opportunities to streamline the way they handle their data, the risk presented by stolen user credentials has only grown.

The same IDG survey also found that almost half of enterprise users recycle their password for more than one enterprise application. And with the average enterprise using up to 1,000 different cloud applications, it is highly likely that enterprise users recycle their passwords for different cloud services. Thus, just one stolen password in the modern cloud environment could provide hackers with a vast amount of enterprise data. In order to overcome the pain of passwords, organisations should look to more reliable methods of securing access to their data in the cloud, such as multi-factor authentication, or biometrics.
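To make the multi-factor suggestion concrete, the snippet below is a minimal generator for time-based one-time passwords (TOTP, RFC 6238), the scheme behind most authenticator apps, using only the Python standard library. This is a sketch of how the second factor is derived, not a production implementation (which would also need secret provisioning, clock-drift windows and rate limiting):

```python
# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over a 30-second time
# counter, dynamically truncated to a short numeric code. Standard
# library only; not hardened for production use.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, interval: int = 30,
         digits: int = 6) -> str:
    """Derive the one-time code for the given Unix time."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", for_time // interval)   # 8-byte counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: this secret at t=59 yields 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and is derived from a shared secret rather than anything the user types, a recycled or stolen password alone is no longer enough to reach the cloud service.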

Good hygiene 

Good security hygiene is always of paramount importance, but even more so when an organisation migrates to the cloud. The advancement of cloud computing has changed the way the modern enterprise works, with mobile devices increasingly being used to access critical business data. In order to best achieve secure access to cloud data, organisations need to understand the environment in which their employees want to work, including what devices they choose to use. 

Organisations can then implement appropriate security protocols. With modern work increasingly taking place on applications on mobile devices, rather than on browsers or desktops, organisations will need to develop a new perimeter defined for the device in order to stop data seeping between cloud apps. 

This is where enrolling devices in a unified endpoint management (UEM) solution becomes essential. Enrolling devices ensures that devices are encrypted and allows IT to enforce appropriate authentication and security policies. It also gives IT the opportunity to delete dangerous apps over-the-air and stop business data from seeping between different cloud-based apps. Enrolling devices in such a way not only serves to maximise the gains in productivity that cloud computing has to offer but also helps to ensure data stored in the cloud is secure.

Dropbox launches admin controls and collaboration hub amid major workspace push

Keumars Afifi-Sabet

26 Sep, 2019

Dropbox has launched a slew of new features as part of its new desktop app that aims to reposition the file-sharing service as a digital workplace hub.

The headline Dropbox Spaces addition aims to transform folders into collaborative workspaces in which teams can organise and share documents, as well as synchronise calendars and facilitate better communications.

The smart workspace feature will also introduce several key integrations over the course of the next few months, including Paper, HelloSign and Trello. Spaces will also allow workers to add comments directly into files, and configure a notifications feed.

This is part of the company’s wider ambitions to shake off its image as a mere cloud storage service and branch out into productivity. The cloud firm first announced a huge overhaul of its desktop app in June, unveiling features like G Suite and Microsoft Office integration, as well as native integrations with partners like Zoom and Slack.

Dropbox also launched a file-sharing service in July called Dropbox Transfer, which aims to combat the limitations of sharing large files via email. This feature will also make its way into the company’s flagship desktop app in the coming months.

“When we talk about the experience of using technology at work, what was stunning to me, even a few years ago was like, man, our industry just keeps making things more complicated, and just keeps throwing new stuff onto the pile,” Dropbox founder and CEO Drew Houston told Cloud Pro in June. “Like, who’s making everything work together?

“Increasingly we saw that our customers are seeing Dropbox more as this workspace in which they use the Office suites and things like that, which triggered a pretty big mental shift for us and completely changed the concept of the product that we wanted to build.”

The company has also made a number of key changes to Dropbox Business, hoping to make lives easier for IT administrators and team managers. These additions include greater access to employee activity and tools for data security and compliance.

Through the enterprise console, IT admins can gain high-level control and visibility while also delegating levels of control to individual team leaders in appropriate cases. Users can also be reviewed and managed across several workspaces, with different controls and settings based on the needs of each department.

Staff activity can also be monitored through an activity page with search, filtering and reporting functionality, as well as the ability to take quick action. The firm has also teased a dashboard, set for a future release, that will highlight high-priority user activity.

Puppet’s 2019 State of DevOps report: How security needs to fit into continuous delivery

You’ve got the processes in place for a revamped software delivery cycle in your organisation. The foundation has been built, the stack is in place and the culture is going in the right direction. What are you missing?

Security in DevOps is an ‘unrealised ideal’ and a key step in moving from DevOps good practice to best practice. That’s according to the latest Puppet State of DevOps Report, published earlier today. 

The report, the eighth in the series, explored the stages organisations had reached in their security integration journeys. Security, alas, is not a competitive differentiator – getting a good product out there is – so the report sympathised with organisations facing the struggle. The road ahead may be paved with good intentions, but good intentions alone don’t change habits – or pay the bills.

In all, around a fifth (21%) of the 3,000 respondents polled with the highest levels of security integration – whereby security is present across all phases of the software delivery cycle – say they have reached an advanced stage of DevOps maturity. Only 6% of those with no security integration say the same.

What’s more, those with the highest level of security integration are more likely to be able to deploy to production on demand, cited by 61% of firms. Of those with no security integration, less than half (49%) can do so. Security-conscious firms are also more than twice as likely to be able to stop a push to production over a moderate security vulnerability, meaning insecure code is far less likely to reach their customers.
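Stopping a push over a vulnerability usually comes down to a small policy step in the delivery pipeline. The scanner output format and severity names below are invented for illustration, but a sketch of such a gate might look like this:

```python
SEVERITY_RANK = {"low": 1, "moderate": 2, "high": 3, "critical": 4}
BLOCK_AT = "moderate"  # policy: block the push at moderate severity and above

def gate(findings):
    """Return True if the release may proceed, given scanner findings
    shaped like {"id": ..., "severity": ...} (a hypothetical format)."""
    threshold = SEVERITY_RANK[BLOCK_AT]
    blocking = [f for f in findings if SEVERITY_RANK[f["severity"]] >= threshold]
    for finding in blocking:
        print(f"blocking release: {finding['id']} ({finding['severity']})")
    return not blocking

# A CI job would exit non-zero when gate(...) is False, halting the deploy.
```

The point of codifying the policy this way is that the decision to block is automatic and shared, rather than resting on an individual reviewer.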

The most marked change was with regards to overall security posture. More than four in five (82%) of those polled with the highest levels of security integration said their security practices ‘significantly’ improved their posture, compared with only 38% of those with no integration.

In some respects, the gap between the haves and the have-nots is not as wide as it might seem – notable given how hard the journey is. Achieving seamless security integration is a multi-layered problem. As the report puts it: “You see the underlying complexity that’s been masked over by years of duct tape and glue. You tackle the roadblock, but as you resolve it, new obstacles appear. You resolve one roadblock after another, and it gets frustrating, but after a while, you see that your team can overcome issues as they arise.”

Last year, the key takeaway concerned getting each step right. The 2018 Puppet report argued that reaching the zenith, where Dev and Ops integrate seamlessly and in perfect harmony, meant a slow evolution. Only one in 10 organisations polled were outliers either way, with 79% of companies somewhere in the middle.

With regard to security, those at the more advanced end of DevOps implementation are automating security policy configurations, and those at the very sharp end are exploring automated incident response. The report notes: “They had cultivated a powerful blend of high-trust environments, autonomous teams, and a high degree of automation and cross-functional collaboration between application teams, operations and security teams.

“The result? Security becomes a shared responsibility across delivery teams that are empowered to make security improvements.”

Ultimately, it is a long road, but a profitable one if all stakeholders care enough – rather like security as a whole. “The DevOps principles that drive positive outcomes for software development – culture, automation, measurement and sharing – are the same principles that drive positive security outcomes,” said Alanna Brown, senior director of developer relations at Puppet and report author. “Organisations that are serious about improving their security practices and posture should start by adopting DevOps practices.”

Almost £9m wasted on under-utilised cloud tech every year

Bobby Hellard

25 Sep, 2019

European businesses are wasting £8.8 million every year on unused cloud services, according to research.

The findings also suggested that a large number of businesses feel IT is being set up to fail as it handles both transformation projects and legacy tech.

The research was conducted by Coleman Parkes for the Insight Intelligent Technology Index (ITI) and surveyed 1,000 IT decision-makers at companies with 500 or more employees.

In Europe, organisations spend an average of £29.48 million on cloud computing, but the survey suggests 30% of that is wasted.
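Those two survey figures are consistent with the headline number – 30% of an average £29.48 million cloud spend works out at roughly £8.8 million a year:

```python
avg_cloud_spend_m = 29.48  # average annual cloud spend per organisation, £m
wasted_share = 0.30        # share of spend the survey flags as under-utilised

wasted_m = avg_cloud_spend_m * wasted_share
print(round(wasted_m, 2))  # 8.84 - the "almost £9m" in the headline
```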

“Under-utilised technology has been a problem for decades, so it’s not surprising to see the problem spread to the cloud,” said Wolfgang Ebermann, president of Insight EMEA. “However, by putting the right controls in place, organisations can optimise cloud consumption and ensure they only pay for services they are using.”

The research highlighted that investment in digital innovation is increasing. Enterprises invested an average of £32.23 million on digital innovation in the last 24 months, and plan to invest £42.12 million in the next two years.

However, 66% of respondents said they feel that IT is being set up to fail as it takes on more responsibility for transformational projects, while still keeping core systems running effectively – which is an increase from the 57% that thought so in 2018.

According to the report, unless there is a change in corporate culture and responsibility for digital innovation is truly shared across the business, this trend is likely to increase.

Boardroom demands to deliver digital projects have placed a lot of pressure on IT teams, particularly to keep costs and security under control. When asked to choose their top three challenges around digital innovation, 46% of respondents chose monthly costs such as operational expenditure, 44% selected upfront costs like capital expenditure, and 38% pointed to insufficient budgets. Similarly, 60% said security is the main factor that keeps them up at night, with 68% saying it is the biggest challenge in managing IT operations globally.

“The strategic importance of IT as a key enabler for future business success is clearly becoming better understood at board level,” continued Ebermann.

“The role of the CIO is clearly evolving from managing IT to business partner. They have become the ‘digital transformation change agent’ and a core member of the executive board. Yet the CIO and IT cannot solely be held responsible for digital innovation; the entire business has a role to play.”