All posts by Bobby Hellard

Slack has started blocking users who have visited US-sanctioned countries


Bobby Hellard

21 Dec, 2018

Communication service Slack is reportedly blocking users with ties to countries that are under sanction by the US government, with immediate effect and no chance of appeal.

Slack said the ban is in response to its obligations under US regulations and is aimed at users who have visited countries under US sanctions, such as Iran, Cuba and North Korea.

However, some users claim bans have been made in error as they haven’t visited the listed nations in recent years.

A number of users have taken to Twitter to question the company’s reasoning, and some have even posted screenshots of the messages Slack has sent them explaining why they’ve been blocked.

“In order to comply with export control and economic sanctions laws and regulations promulgated by the U.S. Department of Commerce and the U.S. Department of Treasury, Slack prohibits unauthorised use of its products and services in certain sanctioned countries and regions including Cuba, Iran, North Korea, Syria, and the Crimea region of Ukraine,” said Slack in a message to banned software developer Amir Omidi.

“We’ve identified your team/account as originating from one of those countries and are closing the account effective immediately.”

Underneath the screenshot, Omidi explained that the immediate ban could not be disputed because there was no appeal process: “So @SlackHQ decided to send me this email. No way to appeal this decision. No way to prove that I’m not living in Iran and not working with Iranians on slack. Nope. Just hello we’re banning your account,” he tweeted.

How Slack determined who to ban has come under scrutiny, with users questioning how the company could know whether they have visited any of the sanctioned nations, or what their ethnicity is. A PhD student from Vancouver, Canada said he received the ban despite having no Slack contacts in Iran.

“Slack closed my account today! I’m a PhD student in Canada with no teammates from Iran! Is Slack shutting down accounts of those ethnically associated with Iran?! And what’s their source of info on my ethnicity?” he tweeted.

A company representative told The Verge that the deactivations were a result of an upgrade to Slack’s geolocation system.

“We updated our system for applying geolocation information, which relies on IP addresses, and that led to the deactivations for accounts tied to embargoed countries,” the representative said. “We only utilize IP addresses to take these actions. We do not possess information about nationality or the ethnicity of our users.

“If users think we’ve made a mistake in blocking their access, please reach out to feedback@slack.com and we’ll review as soon as possible.”

The team at SlackHQ did eventually get back to Omidi but has yet to resolve his issue. Earlier, he tweeted: “Still no response at all and its the end of the workday in eastern US. I am surprised how long it takes them to reverse a ban or to issue some sort of statement on this.”

Google Cloud creates works of art using big data


Bobby Hellard

19 Dec, 2018

Creative minds at Google Cloud have come up with a way to make data storage more interesting by visualising storage traffic data to create stunning works of art.

Working in collaboration with Stamen Design, a data visualisation design studio, Google Cloud used the trajectory, velocity and density of data moving around the globe to create virtual maps.

“Looking at Cloud Storage requests over time showed us a distinct pattern, the pattern gave us a way to correlate countries, and each correlation gave us an insight into connections around the globe,” said Chris Talbott, Google Cloud’s head of cloud storage product marketing.

“So we put it all together in a video that gave every country a turn in the spotlight. It jumps from country to correlated country, showing unexpected connections and prompting conversation and discussion.”

Most of the art created from the request data has been on display at Google Cloud’s Next events in San Francisco, Tokyo and London. The idea was to create a global picture of its service, highlighting patterns that would help the company better serve its customers.

But, as Talbott put it “somewhat jokingly”, Google wondered if it could make boring old storage beautiful. The answer was yes, as it’s managed to paint wonderfully vivid pictures using the data.

The process began by looking at cloud storage data requested by customers. This data charted a request from its country of origin to the relevant cloud region, and vice versa. The team took one week’s worth of storage data and searched for patterns useful to customers. The information detailed the direction of the data, but not who it belonged to.

Visualised data migration from around the world – courtesy of Google Cloud

“The associated data also tells us the size of the request in GBs and a timestamp,” explained Talbott. “Since the data is anonymized, we don’t know which user is making the request, whose data is being requested or what the content is.”

“You can make storage beautiful when you look at it in different ways,” he said, “and in doing so you can really generate some thought-provoking insights for your customers.”

Oracle files lawsuit over $10bn Pentagon cloud contract


Bobby Hellard

13 Dec, 2018

Oracle has again launched legal proceedings over the US Pentagon’s single-vendor cloud contract, filing a suit against the Department of Defence in the US Court of Federal Claims.

The legacy database business has already had legal action dismissed by the Government Accountability Office (GAO). However, a redacted version of the company’s latest complaint, published this week, shows it is not backing down.

However, the GAO maintains that the single vendor approach does not violate any laws and that for issues of national security the process is in the government’s best interests.

The contract, known as the Joint Enterprise Defense Infrastructure (JEDI) cloud, involves the migration of defence department data to a commercially operated cloud system. However, because the contract is only on offer to a single winning bidder, Oracle says it’s illegal and out of sync with the industry.

The company’s senior VP Ken Glueck told TechCrunch: “The technology industry is innovating around next-generation cloud at an unprecedented pace and JEDI as currently envisioned virtually assures DoD will be locked into legacy cloud for a decade or more. The single-award approach is contrary to well-established procurement requirements and is out of sync with the industry’s multi-cloud strategy, which promotes constant competition, fosters rapid innovation and lowers prices.”

The $10bn contract, which could last 10 years, involves migrating almost 80% of the Department of Defence’s IT systems to the cloud. Other cloud providers such as IBM and Google have taken issue with it being made available to just one vendor, with the latter having pulled out of the process entirely.

“We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles,” a Google spokesman said in a statement. “And second, we determined that there were portions of the contract that were out of scope with our current government certifications.”

IBM issued a similar legal challenge against the single-vendor process, which it said violates procurement regulations, but that case was also blocked.

Amazon Web Services is thought to be the front-runner for the contract, an outcome that Oracle will not like given the current mudslinging between the two. During his keynote at AWS re:Invent last month, CEO Andy Jassy announced that the company would be off Oracle databases by the end of 2019.

Kubernetes flaw could allow hackers to gain dangerous admin privileges if left unpatched


Bobby Hellard

5 Dec, 2018

A serious vulnerability in Kubernetes could enable an attacker to gain full administrator privileges over the open source container system’s compute nodes.

The bug, CVE-2018-1002105, is a privilege escalation flaw in Red Hat’s OpenShift, the open source Kubernetes platform, and was spotted by Darren Shepherd, founder of Rancher Labs.

The flaw effectively allows hackers to gain full administrator privileges on Kubernetes compute nodes, the physical and virtual machines on which Kubernetes containers run.

Once those privileges have been gained, hackers can then steal data, inject corrupt code or even delete applications and workloads.

The flaw can be exploited in two ways. The first involves a ‘normal’ user gaining elevated privileges over a Kubernetes pod, which is a group of one or more containers that share network and storage resources and run in a shared context, and from there they could wreak havoc.

The second involves the exploitation of API extensions that connect a Kubernetes application server to a backend server. While a hacker would need to craft a tailored network request to harness the vulnerability in this context, once done they could send requests over the network connection to the backend of the OpenShift deployment.

From there, the attacker has ‘cluster-level’ admin privileges (clusters are a collection of nodes) and therefore escalated privileges on any node. This would allow said attacker to alter existing brokered services to deploy malicious code.

Because the connection to the Kubernetes API server is authenticated with the server’s own security credentials, malicious connections and unauthenticated users with admin privileges appear above board. This makes the flaw and its exploitation difficult to identify, as would-be hackers appear as authorised users.

According to the advisory posted on GitHub, this makes the vulnerability a critical flaw, mostly due to the fact that it allows anyone with access to cause damage, but also because of its invisibility: abusing the flaw leaves no traces in system logs.

«There is no simple way to detect whether this vulnerability has been used. Because the unauthorized requests are made over an established connection, they do not appear in the Kubernetes API server audit logs or server log. The requests do appear in the kubelet or aggregated API server logs, but are indistinguishable from correctly authorized and proxied requests via the Kubernetes API server.»

There are fixes and remedies for this flaw, but the main one is to upgrade the version of Kubernetes you run, now. Patched versions are available, including v1.10.11, v1.11.5, v1.12.3 and v1.13.0-rc.1, and it is recommended that you stop using Kubernetes v1.0.x-1.9.x altogether.
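For illustration only, the short Python sketch below checks whether a cluster’s reported version string (the gitVersion printed by `kubectl version`, for example) is one of the patched releases listed above. The version thresholds come from the advisory; the parsing helper itself is an assumption for illustration, not official tooling.

```python
# Hypothetical helper: check whether a Kubernetes version string such as
# "v1.10.3" is at or above the releases patched for CVE-2018-1002105.
# Thresholds are taken from the advisory; everything else is illustrative.
PATCHED_MINIMUMS = {10: 11, 11: 5, 12: 3}  # v1.10.11, v1.11.5, v1.12.3


def is_patched(git_version: str) -> bool:
    """Return True if the given v1.x.y version carries the fix."""
    core = git_version.lstrip("v").split("-")[0]          # drop "-rc.1" etc.
    major, minor, patch = (int(p) for p in core.split(".")[:3])
    if major != 1:
        return major > 1
    if minor <= 9:
        return False                                      # v1.0.x-1.9.x: unpatched
    if minor in PATCHED_MINIMUMS:
        return patch >= PATCHED_MINIMUMS[minor]
    return True                                           # v1.13 and later ship the fix


if __name__ == "__main__":
    for v in ("v1.10.3", "v1.10.11", "v1.12.2", "v1.13.1"):
        print(v, "patched" if is_patched(v) else "VULNERABLE")
```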

For those who cannot upgrade, there are mitigations: suspend the use of aggregated API servers and remove pod permissions from users who should not have full access to the kubelet API.

IBM chosen to push cloud-native banking platform


Bobby Hellard

4 Dec, 2018

UK software firm Thought Machine has chosen IBM to accelerate the implementation of its cloud-native banking platform.

The system is called Vault and Thought Machine said it was created to give traditional banks with legacy systems and business constraints a platform to meet the demands of modern banking. The partnership has already had some success as Lloyds Banking Group has begun exploring the Vault platform.

The cloud-native core banking system allows banks to fully realise the benefits of IBM Cloud and has been developed to be highly flexible, giving banks the ability to quickly add new products, accommodate shifts in a bank’s strategy or react to external changes in the market, according to Thought Machine.

“Most banks are constrained by their legacy core banking systems. Vault offers the first core banking platform built in the cloud from the ground up, massively scalable and with the flexibility to create and launch new products and services in days versus months, at unprecedented levels of cost and speed,” said Jesus Mantas, managing partner of IBM Global Business Services.

“Our relationship with Thought Machine enables us to provide a lower risk option for banks to reduce the cost of operations and increase customer service and speed to market.”

This alliance between Thought Machine and IBM Services will see the creation of a global practice headquartered in London, which the pair hope will bring together banking transformation and implementation expertise from existing consultants and new hires, to support the demand from banks for the transformation of their core infrastructure.

“Building the new foundations for banking is Thought Machine’s mission and Vault is Thought Machine’s next-generation core banking platform. This specially designed cloud-native new platform has been written to bring a modern alternative to current platforms that many banks around the world are struggling to maintain,” said Paul Taylor, CEO of Thought Machine.

IBM has plenty of experience with banks undergoing digital transformation and cloud migrations, with the tech giant being called in to help TSB as it endured an almost endless nightmare after a failed attempt to leave its legacy IT infrastructure behind.

View from the airport: AWS re:Invent 2018


Bobby Hellard

30 Nov, 2018

“Go big or go home” is the slogan of Guy Fieri’s Vegas Kitchen, just around the corner from the Venetian hotel where AWS re:Invent took up residency for the week.

The cloud giant’s annual conference certainly did just that, taking over the famous Las Vegas strip where over 50,000 people flooded the MGM Grand, the Bellagio, the Venetian and more for the latest cloud computing innovations from AWS.

And the announcements came thick and fast and left most visitors excited and overwhelmed.

There were surprising announcements for blockchain services, satellite geospatial data innovations, machine learning capabilities and plenty more. Partners like Formula 1, Fender and GuardianLife took to the stage to talk up AWS services like SageMaker and AWS Greengrass. The scale of the conference matched that of the city which hosted it, challenging you to experience as much as possible.

“The first day I walked 13,265 steps, which equates to 6.5 miles, so that gives you a sense of getting around to see all the customers,” said Philip Moyer, the global director of financial services at AWS.

“One of the things you probably saw in every single session, there was a customer talking about what they are doing. That is really important for us to have the customer on stage. And, the customer loves to be there as well. It gives great validation for the type of innovation they do.”

Customer innovation was a running theme for the event, where AWS went to great effort to stress the importance of letting its partners build and develop on top of its products. Gavin Jackson, the managing director of AWS for the UK and Ireland, put it best when he said that some people want a Lego product that has instructions to build a specific thing, whereas others will just want Lego bricks to build something from their own imaginations.

“We really try to focus on the builders and really try to make it tangible,” added Moyer. “The business side, the deep technical side, the security side, for me it’s really exciting we get to see people’s faces after they’ve said: ‘we’ve been asking for that and you’ve delivered’. So, in that aspect it’s almost like Christmas day for us, unwrapping all these presents for our customers. So that part is really exciting for us.”

Perhaps the best present of all was saved for the very end, as superstar DJ Skrillex headlined the AWS:Play closing party. The DJ’s appearance was announced by CTO Werner Vogels during his keynote on Thursday.

The AWS: Play after party

According to Vogels, Skrillex requested to be there. Having first headlined the event back in 2014, the DJ was keen to return for a night of fun. Clearly, he knew, as did the 50,000 plus guests, there’s no place quite like Vegas and, arguably, no cloud event quite like re:Invent.

AWS re:Invent: A blockchain service for the right market at the right time


Bobby Hellard

30 Nov, 2018

The announcement of blockchain services at AWS re:Invent came as a surprise, but the cloud giant believes it has the right product for the right market at the right time.

Amazon QLDB, a cryptographically verifiable ledger, and Amazon Managed Blockchain, a fully managed blockchain service, were unveiled by CEO Andy Jassy during his keynote speech. The company hadn’t shown much interest in the technology before, and Jassy said that was because “it hadn’t seen any examples in production that couldn’t be solved by a database”.

Blockchain itself is a perpetual list of records, called blocks, linked using cryptography and containing timestamps and transaction data. There are not yet masses of innovative use cases for it, and its original purpose, underpinning cryptocurrency, is not the best example as Bitcoin continues to sink.

Then there’s the empty hype, such as the Hdac advert that played out during this year’s World Cup, which offered smart home technology powered by the magic of blockchain without much explanation of how it actually worked.

But according to Philip Moyer, the global director of financial services at AWS, the Amazon QLDB and the Amazon Managed Blockchain service are not empty products following tech trends or just blockchain for the sake of it, they’re what AWS customers have asked for.

“A lot of people would say we are late to the game,” he said. “But we actually think we are finding the right product for the right market at the right time.

“Over 90% of our roadmap is driven by what customers ask us to do. That’s a really important aspect, we don’t just build science projects, we’re really building the things the customers ask us for.”

As was evident at re:Invent, AWS works with many financial organisations, such as DTDC, insurance firm Guardian Life and Australian National Bank, which announced a long-term cloud partnership with AWS. For Moyer, these large financial organisations were supportive of its work with blockchain.

“They were really excited,” he said. “They offered loads of support, as did Guardian for those announcements for blockchain-as-a-service and also for QLDB.”

“When you deal with very high-value transactions like the financial industry does, having the veracity of that transaction occurred and being able to have traceability of it, especially if you’re a large scale, highly distributed bank around the world, if somebody puts in a credit into your bank account in one place and someone makes a transaction in another place at the precise same time, to be able to resolve those things, QLDB is a really exciting advancement for the financial industry.”

AWS re:Invent: AWS adds more programming languages to Lambda


Bobby Hellard

29 Nov, 2018

AWS is giving developers the choice to integrate their preferred programming languages into Lambda, via the Lambda Runtime API and Lambda Layers.

These two new AWS Lambda features enable developers to build custom runtimes and share and manage common code between functions.

Making the announcements on stage in Las Vegas at the cloud giant’s re:Invent conference, CTO Werner Vogels told the crowd: “You asked for it, so we’ve given it to you.”

It turns out, what they wanted was more options with Lambda, more flexibility to use the code they are au fait with. AWS Lambda is an event-driven serverless computing platform the company launched in 2014. It was designed to simplify the building of smaller, on-demand applications that are responsive to events and new information.

Up until now, the platform only supported a handful of programming languages, such as Node.js, Python, Java, Go and .NET Core, which limited developers with other language preferences.

The Runtime API for AWS Lambda defines a standardised HTTP-based specification which codifies how Lambda and a function’s runtime communicate. It enables users to build custom runtimes that integrate with Lambda to execute functions in response to events. With the Runtime API, AWS said that developers can use binaries or shell scripts, and their own choice of programming languages and language versions within the Lambda tools.
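To make the mechanics concrete, below is a minimal sketch of the processing loop a custom runtime implements against that HTTP specification. The AWS_LAMBDA_RUNTIME_API environment variable and the invocation endpoints are part of the published Runtime API; the handler and the lack of error reporting are simplifications assumed for illustration, and a real custom runtime would typically ship as a bootstrap executable in the function package or in a layer.

```python
# Minimal sketch of a custom Lambda runtime loop. Python is used purely for
# illustration; any language or binary that can speak HTTP would do.
import json
import os
import urllib.request

# Lambda injects the host:port of the Runtime API into the environment.
API = os.environ["AWS_LAMBDA_RUNTIME_API"]
BASE = f"http://{API}/2018-06-01/runtime/invocation"


def handler(event):
    # Stand-in for the user's function code.
    return {"echo": event}


while True:
    # Long-poll the Runtime API for the next invocation event.
    with urllib.request.urlopen(f"{BASE}/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    # Post the handler's result back for this specific invocation.
    result = json.dumps(handler(event)).encode()
    urllib.request.urlopen(
        urllib.request.Request(f"{BASE}/{request_id}/response", data=result)
    )
```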

“We decided to change course and give you the ability to start bringing your own language to Lambda,” said Vogels. “We are launching today, custom runtimes for Lambda, where you can bring your own execution environment.

“Now there is no limitation anymore for what kind of language you can use to do serverless development in.”

Lambda functions in a serverless application typically share common dependencies such as SDKs, frameworks, and now runtimes. With layers, AWS said users can centrally manage common components across multiple functions enabling better code reuse.
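As a rough illustration of how a shared component becomes a layer, the hedged boto3 sketch below publishes a zipped set of common dependencies as a layer version; the layer name, zip file and runtime list are placeholders rather than anything AWS prescribes. The returned LayerVersionArn is what individual functions then reference to pull in the shared code.

```python
# Illustrative sketch: publish a zipped set of shared dependencies as a
# Lambda layer version using boto3. Names and the zip path are placeholders.
import boto3

lambda_client = boto3.client("lambda")

with open("common-deps.zip", "rb") as f:       # shared SDKs/frameworks
    layer = lambda_client.publish_layer_version(
        LayerName="common-deps",               # hypothetical layer name
        Description="Shared dependencies reused across functions",
        Content={"ZipFile": f.read()},
        CompatibleRuntimes=["python3.7"],
    )

print(layer["LayerVersionArn"])  # attach this ARN to functions that need it
```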

This announcement swiftly followed news that Ruby, the Japanese-designed, object-oriented, general-purpose programming language, has been made available for AWS Lambda functions.

AWS re:Invent: Andy Jassy announces ML Marketplace, Blockchain and more


Bobby Hellard

29 Nov, 2018

At AWS re:Invent on Wednesday, Andy Jassy said that he had «a few things to share». But, over the course of his two-hour keynote, the CEO announced a barrage of new services and capabilities from blockchain to machine learning.

The boss of the world’s biggest cloud computing company has had a busy few days at the annual event in Las Vegas. From making announcements to meeting many of the developers and partners that have flocked to Sin City, Jassy has put himself about and offered plenty of information on everything he’s revealed.

And there were a ridiculous number of them…

Machine Learning Marketplace

Available now, the AWS Marketplace for Machine Learning includes over 150 algorithms and models, with more said to be coming every day, that can be deployed directly to Amazon SageMaker. It’s a giant algorithm hub where developers can find and offer machine learning models for the benefit of all.
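To give a sense of how a marketplace listing is consumed, here is a hedged boto3 sketch that wraps a subscribed model package in a SageMaker model and deploys it behind an endpoint; the ARNs, IAM role and resource names are placeholders, not real listings.

```python
# Rough sketch: deploy a model package subscribed to from the AWS Marketplace
# as a SageMaker endpoint. ARNs, names and the IAM role are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_model(
    ModelName="marketplace-demo-model",
    PrimaryContainer={"ModelPackageName": "arn:aws:sagemaker:eu-west-1:111122223333:model-package/example"},
    ExecutionRoleArn="arn:aws:iam::111122223333:role/SageMakerRole",
    EnableNetworkIsolation=True,  # marketplace packages run network-isolated
)

sm.create_endpoint_config(
    EndpointConfigName="marketplace-demo-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "marketplace-demo-model",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

sm.create_endpoint(
    EndpointName="marketplace-demo",
    EndpointConfigName="marketplace-demo-config",
)
```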

For Gavin Jackson, the managing director of AWS UK and Ireland, this was the biggest news of the day and also a very good example of an underlying theme of this year’s re:Invent. It’s about catering to both those who can and those who can’t.

“The big announcement, I thought, was the Machine Learning Marketplace,” said Jackson. “Because while SageMaker is a good use of existing training models that you can just plug straight into your application, customers who are building their own training models and algorithms for applications can just look at a much wider set of use cases that are available in the marketplace and then just plug them in so they don’t have to build them for themselves.

“At the same time, those that do have data scientists and are building their own algorithms and training models can plug them into the marketplace and monetise it. It’s kind of a marketplace for those that can and those that can’t and everybody wins in the end. It just accelerates the progress of machine learning and artificial intelligence over time.”

Blockchain

Unexpectedly, the CEO announced two new services to help companies manage business transactions for blockchain, starting with Amazon Managed Blockchain. Jassy said that this new service makes it easy to create and manage scalable blockchain networks using the popular, open source Ethereum and Hyperledger Fabric frameworks.

It’s run from the AWS Management Console, where customers can set up a blockchain network that can span multiple AWS accounts and scale to support thousands of applications and millions of transactions.

The second blockchain offering, Amazon QLDB, is a transparent, immutable, and cryptographically verifiable ledger for applications that need a central, trusted authority to provide a permanent and complete record of transactions, such as supply chain, financial, manufacturing, insurance, and HR. Managed Blockchain, by contrast, is for customers who want to build applications where multiple parties can execute transactions without the need for a trusted, central authority.
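For a flavour of how lightweight the ledger side is, the hedged boto3 sketch below creates a QLDB ledger and waits for it to become active; the ledger name and the coarse-grained permissions mode are placeholder choices for illustration.

```python
# Illustrative sketch: create a QLDB ledger with boto3 and wait until it
# becomes active. The ledger name is a placeholder.
import time
import boto3

qldb = boto3.client("qldb")

qldb.create_ledger(
    Name="transaction-ledger",
    PermissionsMode="ALLOW_ALL",   # coarse-grained mode available at launch
)

# Poll until the ledger leaves the CREATING state.
while qldb.describe_ledger(Name="transaction-ledger")["State"] == "CREATING":
    time.sleep(5)

print("Ledger ready")
```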

According to Jassy, the company was asked why it had not shown any previous interest in blockchain, despite many of its customers and partners using the technology.

“We just hadn’t seen that many blockchain examples in production or that couldn’t easily be solved by database,” said Jassy. “People just assumed that meant we didn’t think blockchain was important or that we wouldn’t build a blockchain service. We just didn’t understand what the real customer need was.”

Data

Also announced during the keynote were new services for automating data applications and detailed guidance to help customers build faster on AWS services.

The AWS Control Tower is a cloud interface that allows users to govern multiple AWS workloads, particularly for companies migrating to the cloud. Jassy said it offers pre-packaged governance rules for security, operations, and compliance, which customers can apply enterprise-wide or to groups of accounts to enforce policies or detect violations.

Jumping on the data lake bandwagon, the company is now offering AWS Lake Formation, which will run on Amazon S3. Data lakes are storage systems that source data from multiple locations and store it in files for technologies like machine learning. The AWS version is said to automate and optimise the process, reducing the data management burden for customers.

Hybrid

There was some noise before the event that AWS would address hybrid cloud systems, and it has confirmed AWS Outposts, a fully managed and configurable compute and storage rack service built with AWS-designed hardware. The service allows customers to run on-premises computing and storage functions while connecting to other AWS services in the cloud.

These Outposts come in two variants: first, an extension of the VMware Cloud on AWS service that runs on AWS Outposts; and second, AWS Outposts that allow customers to run on-premises computing and storage using the same native AWS APIs used in the AWS cloud.

AWS Ground Stations link satellites to the cloud


Bobby Hellard

28 Nov, 2018

Amazon Web Services (AWS) announced AWS Ground Station on Tuesday at its re:Invent conference in Las Vegas.

It’s a service that aims to feed satellite data straight into AWS cloud infrastructure faster, more easily and at an affordable price for its customers.

There will be 12 of these ground stations located around the world; they’re effectively antennas that link up with satellites as they pass overhead in orbit. According to the company, this addresses a problem AWS customers have identified: satellites are only within range of a given ground antenna briefly, making uploading and downloading data difficult.

The announcement was made by AWS CEO Andy Jassy, who cited customers as the inspiration and called the service “the first fully managed global ground station service”.

“Customers said, ‘look, there’s so much data in space and so many applications that want to use that data to completely change the way in which we interact with our planet and world’,” he said. “Why don’t we just make this easier?”

There is nothing easy about dealing with satellites, particularly for transferring data.

As they’re only in range for limited periods, linking up is very challenging, and the data itself needs infrastructure in which to be stored, processed and utilised. From top to bottom, it’s an operation that requires a lot of resources, such as land and hardware, which is extremely expensive. Thankfully, the world’s biggest cloud provider has stepped in to find a solution.

So, how does it work? According to AWS, customers can figure out which ground station they want to interact with and identify the satellite they want to connect with. Then, they can schedule a contact time: the exact time they want the satellite to interact with the chosen ground station as it passes by.
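AWS had not published SDK support at the time of the announcement, but as a rough sketch of that workflow, this is roughly how reserving a contact looks with the boto3 groundstation client that followed; every ARN, station name and timestamp below is a placeholder assumption.

```python
# Rough sketch of the workflow described above, using the boto3 groundstation
# client AWS published after the service launched. All values are placeholders.
from datetime import datetime, timedelta
import boto3

gs = boto3.client("groundstation")

start = datetime(2019, 1, 15, 12, 0)            # when the satellite passes over
contact = gs.reserve_contact(
    missionProfileArn="arn:aws:groundstation:us-east-2:111122223333:mission-profile/example",
    satelliteArn="arn:aws:groundstation::111122223333:satellite/example",
    groundStation="Ohio 1",                     # hypothetical station name
    startTime=start,
    endTime=start + timedelta(minutes=10),
)

print(contact["contactId"])                     # scheduled contact to track
```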

Each AWS Ground Station will be fitted with multiple antennas to simultaneously download and upload data through an Amazon Virtual Private Cloud, feeding it directly into the customer’s AWS infrastructure.

“Instead of the old norm where it took hours, or sometimes days, to get data to the infrastructure to process it,” added Jassy, “it’s right there in the region in seconds. A real game changer for linking with satellites.”

While AWS provides the cloud computing, the antennas themselves come from its partnership with Lockheed Martin, which has developed a network of antennas called Verge. Where AWS offers the power to process and store the data, Verge promises a resilient link for it to travel over.

“Our collaboration with AWS allows us to deliver robust ground communications that will unlock new benefits for environmental research, scientific studies, security operations, and real-time news media,” said Rick Ambrose, executive VP of Lockheed Martin.

“In time, with satellites built to take full advantage of the distributed Verge network, AWS and Lockheed Martin expect to see customers develop surprising new capabilities using the service.”