Category archive: server

Are cyber attacks covering up server inadequacies at Pokémon Go?

Pokémon Go users have continued to struggle as the app’s developer Niantic Labs recovers from hacker attacks and unprecedented demand for the game, reports Telecoms.com.

Claimed attacks from various hacker groups appear to have covered up server inadequacies at Niantic Labs, as the team seemingly struggles to meet capacity demands following the game’s launch in 27 countries worldwide.

Over the course of the weekend, various hacker groups including PoodleCorp and OurMine have claimed responsibility for distributed denial of service (DDoS) attacks, causing a slow and clunky experience for many players around the world. Although the Niantic Labs team has played down the incidents, disruptions have continued into Monday morning, with the Telecoms.com editorial team unable to access the game effectively. Whether this can be attributed to the claimed attacks or a lack of server capacity is unclear for the moment.

The hacker saga appears to have started over the weekend, with OurMine stating on its website, “Today We will attack “Pokemon Go” Login Servers! so no one will be able to play this game till Pokemon Go contact us on our website to teach them how to protect it! We will attack it after 3-4 hours! Be ready! We will update you!” This was followed by another statement declaring the servers were down. PoodleCorp had claimed the day before (July 16) that it caused an outage, though it also said to expect a larger attack in the near future.

While both of these attacks have attracted headlines, they also appear to have covered up shortcomings in the company’s infrastructure and its ability to deal with high demand. The launch of Pokémon Go has been well documented over the last few weeks as it has been lauded by numerous sources as the biggest mobile game in US history. Even before its official release in the UK, EE announced it saw 350,000 unique users of Pokémon GO on its network.

“This is the fastest take up of an app or game we’ve ever seen – and that’s before it’s officially launched! People across the country are going to be relying on a mobile data network that’s everywhere they go,” said Matt Stagg, EE head of video and content strategy.

Despite claims the server problems have been addressed, complaints have continued to be voiced. Server status tracking website Downdetector stated 39,013 complaints were registered at 22.00 (EST) on July 17. The Niantic Labs team are seemingly underestimating demand for Pokémon Go with each launch, which would be a nice problem to have.

While Telecoms.com was unable to identify Niantic Labs’ specific cloud set-up, other reports have identified Google as the chosen platform. Although there are no specific announcements linking the two organizations, Niantic was spun out of Google in October last year, and currently has John Hanke at the helm, who was previously VP of Product Management for Google’s Geo division, which includes Google Earth, Google Maps and Street View. A job vacancy on the company’s website also asks for experience with Google Cloud or AWS.

Although AWS is listed on the job vacancy, it would be fair to assume it is not currently involved, as CTO Werner Vogels couldn’t resist joking about the affair, stating “Dear cool folks at @NianticLabs please let us know if there is anything we can do to help!” on his Twitter account. This could imply some insider knowledge from Vogels, as the company would be most likely to take a swipe at one of its closest rivals in the public cloud market segment, namely Google or Microsoft Azure.

The claims of the DDoS attacks would appear to have come at an opportune time, as they have taken the heat off the cloud infrastructure inadequacies. According to Business Insider, Hanke said the international roll-out of the game would be “paused until we’re comfortable”, in relation to the server capacity issues. It would seem the company is prepared to ride the wave of demand, as well as complaints, and fix the server problems later, as launches and server issues continued following that interview.

Microsoft announces general availability of SQL Server 2016

Microsoft has announced that SQL Server 2016 will hit general availability for all customers worldwide as of June 1.

SQL Server 2016, which is recognized in Gartner Magic Quadrants for Operational Database, Business Intelligence, Data Warehouse, and Advanced Analytics, will be available in four editions: Enterprise, Standard, Express and Developer. The team also announced it would move customers’ Oracle databases to SQL Server free with Software Assurance.

“SQL Server 2016 is the foundation of Microsoft’s data strategy, encompassing innovations that transform data into intelligent action,” said Tiffany Wissner, Senior Director of Data Platform Marketing at Microsoft. “With this new release, Microsoft is delivering an end-to-end data management and business analytics solution with mission critical intelligence for your most demanding applications as well as insights on your data on any device.”

Features of SQL Server 2016 include mission-critical intelligent applications delivering real-time operational intelligence, enterprise-scale data warehousing, the new Always Encrypted technology, business intelligence solutions on mobile devices, new big data solutions that combine relational and non-relational data, and the new Stretch Database technology for hybrid cloud environments.

“With this new innovation, SQL Server 2016 is the first born-in-the-cloud database, where features such as Always Encrypted and Row-Level Security were first validated in Azure SQL Database by hundreds of thousands of customers and billions of queries,” said Wissner.
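Of those features, Always Encrypted is the one developers touch most directly: encryption and decryption happen in the client driver, so plaintext never reaches the server. Below is a minimal client-side sketch in Python with pyodbc, assuming the Microsoft ODBC Driver 13.1 (or later) for SQL Server and a database already configured with encrypted columns; the server name, credentials and schema are hypothetical placeholders:

```python
# Connect to SQL Server 2016 with Always Encrypted enabled on the client.
# Server, database, credentials and the Customers schema are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 13 for SQL Server};"
    "Server=sqlprod01.example.com;"
    "Database=SalesDb;"
    "Uid=app_user;Pwd=s3cret;"
    "ColumnEncryption=Enabled;"  # driver transparently encrypts/decrypts
)

# Parameterised queries work as usual; the driver encrypts parameters
# bound to encrypted columns before they ever leave the client.
cursor = conn.cursor()
cursor.execute(
    "SELECT CustomerName FROM dbo.Customers WHERE SSN = ?",
    "555-12-3456",
)
for row in cursor.fetchall():
    print(row.CustomerName)
```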

Last month, the team also announced it was bringing SQL Server to Linux, enabling SQL Server to deliver a consistent data platform across Windows and Linux, as well as on-premises and cloud. The move surprised some corners of the industry, as it breaks with Microsoft’s tradition of creating business software that runs only on the Windows operating system. The news continues Chief Executive Satya Nadella’s strategy of making Microsoft a more open and collaborative organization.

Rackspace updates OpenStack-powered cloud server, OnMetal

Rackspace has updated its OpenStack-powered cloud server, OnMetal, focusing its new features on building connectivity between public cloud and dedicated hardware.

The company highlighted that the new servers deliver enhanced compute power, and are designed for customers aiming to run workloads such as Cassandra, Docker and Spark, which require intensive data processing as well as the ability to scale and deploy quickly.

“With the combination of new features and performance capabilities in the next generation of OnMetal, it can be a solution for many customers seeking OpenStack as the platform to run their most demanding workloads,” said Paul Voccio, VP Software Development at Rackspace.

The new servers, designed from Open Compute Project specs, feature the Intel Xeon E5-2600 v3 processors, and build on Rackspace’s journey to lead the OpenStack market. Last month, Rackspace added an OpenStack-as-a-Service option, in partnership with Red Hat, to its proposition while highlighting its ambitions “to deliver the most reliable and easy-to-use OpenStack private and hybrid clouds in the world.”

Rackspace claims app performance and reliability indicators improve with OnMetal cloud servers. The bare metal offering, generally associated with increased security, has helped its customer Brigade avoid the performance limitations common to virtualized environments.
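Because OnMetal is exposed through the standard OpenStack compute API, provisioning a bare-metal node looks much like booting a virtual server, with only the flavor differing. Here is a minimal sketch using the openstacksdk Python library, assuming a configured clouds.yaml entry; the flavor and image names are illustrative placeholders, not confirmed Rackspace product names:

```python
# Boot a bare-metal server through the standard OpenStack compute API.
# Assumes a clouds.yaml entry named "rackspace"; flavor and image names
# below are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="rackspace")

image = conn.compute.find_image("Ubuntu 14.04 LTS")           # placeholder
flavor = conn.compute.find_flavor("onmetal-general2-small")   # hypothetical OnMetal flavor

server = conn.compute.create_server(
    name="cassandra-node-01",
    image_id=image.id,
    flavor_id=flavor.id,
)

# Block until the bare-metal node is provisioned and ACTIVE.
server = conn.compute.wait_for_server(server)
print(server.status, server.access_ipv4)
```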

“OnMetal has played a significant role in our ability to deliver the Brigade app with optimal uptime, and to innovate and grow the application with the performance of a dedicated environment,” said John Thrall, CTO of Brigade.

Microsoft strengthens cloud offering by bringing SQL Server to Linux

Microsoft is bringing its SQL Server to Linux, enabling SQL Server to deliver a consistent data platform across Windows and Linux, as well as on-premises and cloud.

The move has surprised some corners of the industry, as Microsoft moves away from its tradition of creating business software that runs only on the Windows operating system. It has historically been difficult to manage certain Microsoft products on anything other than a Windows server.

Microsoft has always sold PC software that can run on competitors’ machines, though Chief Executive Satya Nadella broadened the horizons of the business upon his appointment through a number of different initiatives. One of the most notable moves was decoupling Microsoft’s Azure cloud computing system from Windows, and this week’s announcement seems to continue the trend.

The news has been lauded by most as an astute move, strengthening Microsoft’s position in the market. According to Gartner, the number of Linux servers shipped increased to 3.6 million in 2014 from 2.4 million in 2011. Microsoft in the same period saw its shipments drop from 6.5 million to 6.2 million. The move opens up a new wave of potential customers for Microsoft and reduces concerns of lock-in situations.

Microsoft EVP, Cloud and Enterprise Group, Scott Guthrie commented on the company’s official blog: “SQL Server on Linux will provide customers with even more flexibility in their data solution,” he said. “One with mission-critical performance, industry-leading TCO, best-in-class security, and hybrid cloud innovations – like Stretch Database, which lets customers access their data on-premises and in the cloud whenever they want at low cost – all built in. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017.”

The announcement also detailed a number of key features of SQL Server 2016, focused around the critical avenues of data and security. Encryption capabilities that keep data encrypted at rest, in motion and in memory are one of the USPs, building on Microsoft’s marketing messages over the last 12 months.

Furthering its efforts to diversify the business, Microsoft announced last week that it would be acquiring mobile app development platform provider Xamarin.

Incorporating Xamarin into the Microsoft business will enhance its base of developer tools and services, once again building on the theme of broadening market appeal and opening new customer avenues for the tech giant.

Qualcomm and Guizhou to make new server chipsets in China

San Diego-based chip maker Qualcomm and China’s Guizhou Huaxintong Semiconductor company have announced a joint venture to develop new server chipsets designed for the Chinese market.

The news comes only a week after chip maker AMD announced its new Opteron A1100 system-on-chip (SoC) for ARM-based systems in the data centre. Both partnerships reflect how server design for data centres is evolving to suit the cloud industry.

The Qualcomm partnership, announced on its website, was formalised at the China National Convention Center in Beijing as officials from both companies and the People’s Government of Guizhou Province signed a strategic cooperation agreement. The $280 million joint venture will be 55% owned by the Guizhou provincial government’s investment arm, while 45% will belong to a Qualcomm subsidiary.

The plan is to develop advanced server chipsets in China, which is now the world’s second largest market for server technology sales.

The action is an important step for Qualcomm as it looks to deepen its level of cooperation and investment in China, said Qualcomm president Derek Aberle. In February 2015 BCN sister publication Telecoms.com reported how the chip giant had fallen foul of the Chinese authorities for violating China’s trading laws. It was fined 6 billion yuan (around $1 billion) after its marketing strategy was judged to be against the nation’s anti-monopoly law.

“The strategic cooperation with Guizhou represents a significant increase in our collaboration in China,” said Aberle. Qualcomm is to provide investment capital, license its server technology to the joint venture, help with research and development and provide implementation expertise. “This underscores our commitment as a strategic partner in China,” said Aberle.

Last week, AMD claimed the launch of its new Opteron A1100 SoC will catalyse a much more rapid development process for creating servers suited to hosting cloud computing in data centres.

AMD’s partner in chip development for servers, ARM, is better placed to create processors for the cloud market as it specialises in catering for a wider diversity of needs. Whereas Intel makes its own silicon and can only hope to ship around 30 custom versions of its latest Xeon processor to large customers like eBay or Amazon, ARM licenses its designs to some 300 third-party silicon vendors, each developing their own use cases for different clients and variants of server workloads, it claimed.

“The ecosystem for ARM in the data centre is approaching an inflection point and the addition of AMD’s high-performance processor is another strong step forward for customers looking for a data centre-class ARM solution,” said Scott Aylor, AMD’s general manager of Enterprise Solutions.

Samsung unveils 128GB DDR4 memory modules for datacentres

Samsung Electronics says it is mass producing memory modules for datacentre and enterprise servers that could turbocharge cloud services.

It has published details in a blog post of double data rate 4 (DDR4) memory in 128-gigabyte (GB) modules. These, when installed in enterprise servers and data centres, could significantly speed the rate of processing in cloud computing applications, slashing response times, boosting productivity and raising the quality of service.

The new modules use TSV (‘through silicon via’), an advanced chip packaging technology that vertically connects DRAM dies using electrodes that penetrate the micron-thick dies through microscopic holes. Samsung first used this when it introduced its 3D TSV DDR4 DRAM (64GB) in 2014. TSV is used again in this new registered dual inline memory module (RDIMM), which, Samsung claims, opens the door to ultra-high-capacity memory at the enterprise level.

The 128GB TSV DDR4 RDIMM comprises a total of 144 DDR4 chips, arranged into 36 4GB DRAM packages, each containing four 20-nanometer (nm)-based 8-gigabit (Gb) chips assembled with TSV packaging technology.
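The capacity arithmetic behind those figures is worth spelling out. Note that the split between data and ECC packages below is an assumption based on typical ECC RDIMM layouts; Samsung’s figures state only the 36 packages and 144 chips in total:

```python
# Recap of the module's capacity arithmetic. The 32-data/4-ECC package
# split is an assumption based on typical ECC RDIMM layouts.
GBIT_PER_CHIP = 8        # each die is an 8-gigabit DRAM chip
CHIPS_PER_PACKAGE = 4    # four dies stacked per TSV package
PACKAGES = 36            # packages on the module

package_gb = GBIT_PER_CHIP * CHIPS_PER_PACKAGE / 8   # 4 GB per package
raw_gb = PACKAGES * package_gb                       # 144 GB raw
data_gb = 32 * package_gb                            # 128 GB usable (assumed)
ecc_gb = 4 * package_gb                              # 16 GB for ECC (assumed)

print(f"raw: {raw_gb:.0f} GB, data: {data_gb:.0f} GB, ECC: {ecc_gb:.0f} GB")
# raw: 144 GB, data: 128 GB, ECC: 16 GB
```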

Unlike conventional chip packages, which interconnect die stacks with wire bonding, the TSV packages interconnect through hundreds of fine holes, vertically connected by electrodes passing through them. This creates a massive improvement in signal transmission speeds. In addition, Samsung’s 128GB TSV DDR4 module has a special data buffer function that improves module performance and lowers power consumption.

As a result, servers can reach 2,400 megabits per second (Mbps) per pin, roughly twice the normal speed at half the power usage. Samsung says it is now accelerating production of TSV technology to ramp up 20nm 8Gb DRAM chips to improve manufacturing productivity.
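To put the per-pin figure in module terms, a short calculation; the 64-bit data bus width is the standard DDR4 figure, assumed here rather than quoted in the announcement:

```python
# Translate the quoted 2,400 Mbps per-pin rate into peak module throughput.
# The 64-bit (8-byte) DDR4 data bus width is assumed, not quoted.
TRANSFERS_PER_SEC = 2400e6   # 2,400 MT/s
BUS_WIDTH_BYTES = 8          # 64 data pins / 8 bits per byte

peak_gb_s = TRANSFERS_PER_SEC * BUS_WIDTH_BYTES / 1e9
print(f"peak module bandwidth: {peak_gb_s:.1f} GB/s")  # 19.2 GB/s
```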

“We will continue to expand our technical cooperation with global leaders in servers, consumer electronics and emerging markets,” said Joo Sun Choi, executive vice president of Memory Sales and Marketing at Samsung Electronics.

AWS launches EC2 Dedicated Hosts feature to identify specific servers used

Amazon Web Services (AWS) has launched a new service for the nervous server hugger: it gives users knowledge of the exact server that will be running their machines and also includes management features to prevent licensing costs escalating.

The new EC2 Dedicated Hosts service was created by AWS in reaction to the sense of unease that users experience when they never really know where their virtual machines (VMs) are running.

Announcing the new service on the company blog, AWS chief evangelist Jeff Barr said the four main areas of improvement would be licensing savings, compliance, usage tracking and better control over instances (AKA virtual machines).

The Dedicated Hosts (DH) service will allow users to port their existing server-based licenses for Windows Server, SQL Server, SUSE Linux Enterprise Server and other products to the cloud. A feature of DH is the ability to see the number of sockets and physical cores available to a customer before they invest in software licenses, improving their chances of not overpaying. Similarly, the Track Usage feature will help users monitor and manage their hardware and software inventory more thriftily. By using AWS Config to track the history of instances started and stopped on each of their Dedicated Hosts, customers can verify usage against their licensing metrics, Barr says.

Another management improvement comes from the Control Instance Placement feature, which promises ‘fine-grained control’ over the placement of EC2 instances on each Dedicated Host.
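In practice the workflow is to allocate a host, then target it explicitly at launch time. A minimal sketch using the AWS SDK for Python (boto3); the region, zone, instance type and AMI ID below are placeholders:

```python
# Allocate a Dedicated Host and pin an instance to it via boto3.
# Region, zone, instance type and the AMI ID are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Reserve a physical server; with auto-placement off, instances land on
# it only when the host is targeted explicitly.
host = ec2.allocate_hosts(
    AvailabilityZone="us-east-1a",
    InstanceType="m4.large",
    Quantity=1,
    AutoPlacement="off",
)
host_id = host["HostIds"][0]

# Launch with host tenancy, pinned to the allocated host.
resp = ec2.run_instances(
    ImageId="ami-xxxxxxxx",          # placeholder AMI
    InstanceType="m4.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "host", "HostId": host_id},
)
print(resp["Instances"][0]["InstanceId"])
```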

The provision of a physical server may be the most welcome addition for the many cloud buyers dogged by doubts over compliance and regulatory requirements. “You can allocate Dedicated Hosts and use them to run applications on hardware that is fully dedicated to your use,” says Barr.

The service will help enterprises with complicated portfolios of software licenses where prices are calculated on the number of CPU cores or sockets. However, Dedicated Hosts can only run in tandem with AWS’ Virtual Private Cloud (VPC) service and cannot yet work with its Auto Scaling tool.

Oracle and Intel announce plans to ramp up the offensive on IBM in the cloud

Intel and Oracle are to build on a previous collaboration which saw them jointly take on IBM in the cloud computing hardware market. Now they are conspiring again, this time to target Oracle’s database and software customers, in a bid to get them to ditch their IBM computer servers and buy Oracle/Intel servers instead.

The new pact was announced at the opening of Oracle’s tech conference as Intel CEO Brian Krzanich took the stage on Sunday with Oracle CEO Mark Hurd. Project Apollo, in which the two manufacturers pooled engineers in a joint bid to investigate how massive cloud computing data centres can run faster using Oracle hardware with Intel chips, was pronounced mission accomplished.

On Sunday, Hurd and Krzanich announced the new hardware partnership and a supporting conversion programme. Hurd said ‘thousands’ of customers have dropped IBM for Oracle when running Oracle software. To back this up, Oracle launched a migration support programme: the ‘Exa Your Power’ programme (EYP), aimed at helping customers move their Oracle Database from IBM Power systems to Oracle Engineered Systems using Intel technology.

The EYP is a free database migration Proof of Concept study in which Oracle will assess a customer’s environment, create a database migration results report and show how it thinks the customer could significantly cut the time and costs of running critical database workloads.

“CSC has successfully migrated dozens of customers’ enterprise workloads,” said Ashish Mahadwar, executive general manager of CSC’s Emerging Business Group. “We recently migrated an Oracle Database for a major insurance provider from IBM Power 7 to an Exadata X5 engineered system as a Proof of Concept.”

Mahadwar claimed that test results showed a Siebel application running four to ten times faster and ETL processes running up to 12 times faster on Exadata.

Transformation of the enterprise is already underway with the continuous improvements in the vast software ecosystem that Intel and Oracle jointly deliver, according to Mahadwar. “The Exa Your Power program will make it easier for customers to realize the benefits of moving to Intel architecture,” said Mahadwar.

Quanta intros Intel RSA Open Compute proof of concept

Quanta is mashing up Intel’s RSA and Open Compute designs

Taiwanese datacentre vendor Quanta has introduced an Intel Rack Scale Architecture (Intel RSA) proof of concept rack solution based on Open Compute specifications which the company is pitching at hyperscale datacentre operators and cloud providers.

Intel RSA is the chip vendor’s own modular architecture design that disaggregates compute, storage and networking and weaves them together in a fabric it claims makes resources easier to pool and pod.

Now Quanta has developed a proof of concept for a server that blends Intel’s RSA specs and Open Compute designs.

The hardware vendor, which already offers hardware based on Open Compute designs, claims the solution will significantly reduce datacentre energy consumption and costs, reduce vendor lock-in, and ease management and maintenance.

“Datacentres face significant challenges to efficiency, flexibility and agility,” said Mike Yang, general manager of QCT. “Working with Intel on the Intel RSA program, we have developed our product lineup based on Open Compute to give customers the utmost in efficiency and performance, supported by open standards.”

“In addition, we provide manageability from the chassis level and rack level, up to pod level, so customers can easily pool resources across these levels to support dynamic workloads,” Yang said.

ODMs like Quanta have gained strong share in the hyperscale datacentre space because of their cost competitiveness, and at the same time the Open Compute project, an open source hardware project founded by Facebook a few years back, seems to be gaining favour among large cloud providers. Facebook, IBM, HP and Rackspace are among some of the larger providers building out Open Compute-based services at reasonable scale.

Fujitsu, Red Hat partner on OpenStack-based private clouds

Red Hat and Fujitsu are partnering to develop OpenStack converged infrastructure solutions

Fujitsu and Red Hat have jointly developed a dedicated solution to simplify the creation of OpenStack private clouds.

Primeflex is a converged compute and storage offering that combines Fujitsu’s server technology with Red Hat’s OpenStack software, the Red Hat Enterprise Linux OpenStack Platform, and is backed by Fujitsu’s professional services outfit.

The companies said the OpenStack-based converged offering will speed up cloud deployment.

Harald Bernreuther, director global infrastructure solutions at Fujitsu said: “Primeflex for Red Hat OpenStack can underpin any organisation’s plan to transform their business model by leveraging cloud computing. By opting for an OpenStack-based solution, organisations can run new cloud-scale workloads while also optimising costs.

“Primeflex for Red Hat OpenStack extends the philosophy of cost optimisation, through simplifying system maintenance and consolidating technology updates across the entire system stack, all the way from the underlying hardware through to the operating system,” Bernreuther said.

Red Hat said there is value in driving strong integration between software and hardware in the cloud space.

“OpenStack is a rapidly-growing, open source cloud infrastructure platform that is cost-effective, open, flexible and highly scalable,” said Radhesh Balakrishnan, general manager, OpenStack, Red Hat.

“We are excited about Fujitsu’s offering based on Red Hat Enterprise Linux OpenStack Platform to deliver private cloud infrastructure solutions and we look forward to continuing the collaboration to provide customers with an innovative cloud platform for digital business initiatives,” he said.

Red Hat isn’t the only OpenStack vendor boosting its converged infrastructure strategy of late. In July Mirantis unveiled plans to work with a range of vendors, initially Dell and Juniper, to deliver OpenStack-based converged infrastructure solutions for enterprises.