Category archive: MySQL

MySQL: Your password does not satisfy the current policy requirements

Using the validate_password plugin we can define minimum security requirements for MySQL passwords:

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'%' identified by 'demopassword';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'1.2.3.4' identified by 'demopassword';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
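
Before disabling anything, we can check which policy the plugin is enforcing; its thresholds are exposed as server variables (the output below is illustrative, showing typical defaults, and will vary with your configuration):

mysql> SHOW VARIABLES LIKE 'validate_password%';
+--------------------------------------+--------+
| Variable_name                        | Value  |
+--------------------------------------+--------+
| validate_password_dictionary_file    |        |
| validate_password_length             | 8      |
| validate_password_mixed_case_count   | 1      |
| validate_password_number_count       | 1      |
| validate_password_policy             | MEDIUM |
| validate_password_special_char_count | 1      |
+--------------------------------------+--------+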

Obviously, in certain environments this plugin can become a problem, so we may want to disable it.

We can disable the plugin with the following command:

mysql> uninstall plugin validate_password;
Query OK, 0 rows affected (0.04 sec)

From then on, passwords are no longer checked against the minimum security criteria:

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'%' identified by 'demopassword';
Query OK, 0 rows affected, 1 warning (0.06 sec)

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER ON `dbdemo`.* TO 'demouser'@'1.2.3.4' identified by 'demopassword';
Query OK, 0 rows affected, 1 warning (0.01 sec)
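
If we later want to restore the minimum security checks, the plugin can simply be reinstalled (the shared-object name below assumes a Linux build; Windows builds use validate_password.dll):

mysql> INSTALL PLUGIN validate_password SONAME 'validate_password.so';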

Google upgrades Cloud SQL, promises managed MySQL offerings

Google has announced the beta availability of a new, improved Cloud SQL for Google Cloud Platform – and an alpha version of its much anticipated Content Delivery Network offering.

In a blog post, Brett Hesterberg, Product Manager for Google’s Cloud Platform, says the second generation of Cloud SQL will aim to deliver better performance and more ‘scalability per dollar’.

In Google’s internal testing, the second generation Cloud SQL proved seven times faster than the first generation and it now scales to 10TB of data, 15,000 IOPS and 104GB of RAM per instance, Hesterberg said.

The upshot, according to Hesterberg, is a flexibility that traditional relational database deployments could not offer. “With Cloud SQL we’ve changed that,” he said. “Flexibility means easily scaling a database up and down.”

Databases can now scale up and down both in size and in the number of queries served per day. The allocation of resources such as CPU cores and RAM can be adapted more skilfully with Cloud SQL, using a variety of tools such as MySQL Workbench, Toad and the MySQL command line. Another promised improvement is that any client can be used for access, including Compute Engine, Managed VMs, Container Engine and workstations.

In the new cloud environment, databases that are only used for brief or infrequent tasks need to be easy to stop and restart, according to Hesterberg. Cloud SQL now caters for these increasingly common usage patterns through the Cloud Console, the command line within Google’s gCloud SDK, or a RESTful API. This makes administration a scriptable job and minimises costs by running the databases only when necessary.
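
As a sketch of what that scripting can look like (the instance name here is hypothetical, and the flags follow the gcloud sql syntax of the time), stopping and restarting an instance comes down to patching its activation policy:

# Stop a Cloud SQL instance that is only needed occasionally
gcloud sql instances patch my-instance --activation-policy NEVER

# Bring it back before the next batch run
gcloud sql instances patch my-instance --activation-policy ALWAYS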

Cloud SQL will make for more manageable MySQL databases, claims Hesterberg, since Google will apply patches and updates to MySQL, manage backups, configure replication and provide automatic failover for High Availability (HA) in the event of a zone outage. “It means you get Google’s operational expertise for your MySQL database,” says Hesterberg. Subscribers to Google Cloud Platform can now get a $300 credit to test drive Cloud SQL, the company announced.

Meanwhile, in another blog post, Google announced an alpha release of its own content delivery network, Google Cloud CDN. Performance may not yet be consistent and the service is not recommended for production use, Google warned.

Google Cloud CDN will speed up Google’s cloud services by using distributed edge caches to bring content closer to users, in a bid to compensate for the company’s relatively low global data centre coverage compared with rivals AWS and Azure.

Percona buys Tokutek to mashup MySQL, NoSQL tech

Percona acquired Tokutek to strengthen its expertise in NoSQL

Relational database services firm Percona announced it has acquired Tokutek, which provides a high-performance MongoDB distribution and NoSQL services. Percona said the move will allow it to improve support for non-relational database technologies.

Tokutek offers a distribution of MongoDB, called TokuMX, which the company pitches as a drop-in replacement for MongoDB – but with up to 20 times the performance and a 90 per cent reduction in database size.

One of the things that makes it so performant is its use of fractal tree indexing, a data structure that optimises I/O and allows simultaneous search and sequential access, but with much faster insertions and deletions (it can also be applied in MariaDB).
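
As an illustration of that last point, on a MariaDB or Percona Server build that bundles the engine, opting a table into fractal tree indexing is just a storage-engine choice (a sketch: the table is made up, and the TokuDB plugin must already be installed):

mysql> CREATE TABLE events (
    ->   id BIGINT NOT NULL PRIMARY KEY,
    ->   payload TEXT
    -> ) ENGINE=TokuDB;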

Percona said the move positions the company to offer the full range of consulting and technology services needed to support MySQL and MongoDB deployments; Percona Server already supports TokuMX as an option, but the move will see the latter further integrated and shipped as standard with the former.

“This acquisition delivers game-changing advantages to our customers,” said Peter Zaitsev, co-founder and chief executive of Percona. “By adding a market-leading, ACID-compliant NoSQL data management option to our product line, customers finally have the opportunity to simplify their database decisions and on-going support relationships by relying on just one proven, expert provider for all their database design, service, management, and support needs.”

John Partridge, president and chief executive of Tokutek said: “Percona has a well-earned reputation for expert database consulting services and support. With the Tokutek acquisition, Percona is uniquely positioned to offer NoSQL and NewSQL software solutions backed by unparalleled services and support. We are excited to know Tokutek customers can look forward to leveraging Percona services and support in their TokuMX and TokuDB deployments.”

NoSQL adoption is growing at a fairly fast rate as applications shift to handle more and more unstructured data (especially cloud apps), so it’s likely we’ll see more MySQL incumbents pick up non-relational startups in the coming months.


YouTube brings Vitess MySQL scaling magic to Kubernetes

YouTube is working to integrate a beefed up version of MySQL with Kubernetes

YouTube is working to integrate Vitess, which improves the ability of MySQL databases to scale in containerised environments, with Kubernetes, an open source container deployment and management tool.

Vitess, which is available as an open source project and pitched as a high-concurrency alternative to NoSQL and vanilla MySQL databases, uses a BSON-based protocol that creates very lightweight connections (around 32KB each). Its pooling feature uses Go’s concurrency support to map these lightweight connections onto a small pool of MySQL connections, which is how Vitess can handle thousands of connections.

It also handles horizontal and vertical sharding, and can dynamically rewrite queries that would otherwise impede database performance.

Anthony Yeh, a software engineer at YouTube, said the company currently uses Vitess to handle metadata for its video service, which serves billions of video views a day and takes in 300 hours of new video uploads per minute.

“Your new website is growing exponentially. After a few rounds of high fives, you start scaling to meet this unexpected demand. While you can always add more front-end servers, eventually your database becomes a bottleneck.”

“Vitess is available as an open source project and runs best in a containerized environment. With Kubernetes and Google Container Engine as your container cluster manager, it’s now a lot easier to get started. We’ve created a single deployment configuration for Vitess that works on any platform that Kubernetes supports,” he explained in a blog post on the Google Cloud Platform website. “In this environment, Vitess provides a MySQL storage layer with improved durability, scalability, and manageability.”
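
Getting started, then, is mostly a matter of handing that configuration to a Kubernetes cluster; a rough sketch (the file name here is hypothetical; the actual configurations ship with the Vitess project):

kubectl create -f vitess-cluster.yaml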

Yeh said the company is just getting started with the Kubernetes integration, but that users will soon be able to deploy Vitess in containers with Kubernetes on any cloud platform it supports.

Bull Services Facilitate Adoption of Open Source PostgreSQL

Bull HN Information Systems is rolling out IT support services with the launch of its MOVE IT (Modernize, Optimize, Virtualize and Economize Information Technology) campaign to showcase products and services that it recently announced and plans to announce in the future. These products and services help customers derive maximum value from their legacy IT investments and get the most out of their IT operations while opening enterprise data to the cloud and mobile devices.

Bull’s newest MOVE IT service offerings are PostgreSQL support subscriptions; database design and build assessments; database performance and tuning services; and forms and reports migration services. These service offerings support migration to PostgreSQL—recognized as the world’s most advanced open source database—enabling organizations to reduce costs and open enterprise data to the cloud and virtualized environments.

According to Bull’s Data Migration Business Unit Director Jim Ulrey, “MOVE IT services and software help free companies from high licensing and maintenance costs, and offer both dramatic operational efficiencies and the agility required to flourish in competitive business environments.

“We developed MOVE IT enterprise solutions including database migration services, software and our newest support services to meet the needs of IT departments that prefer to manage work internally, as well as those that prefer to outsource—whether due to skill sets, resources or project objectives,” concluded Ulrey.

Bull’s MOVE IT products and services work effectively standalone by providing solutions to specific challenges, and they’re also engineered to work together to provide enterprise IT clients with multiple benefits. From cost-saving database migrations from Oracle to flexible open source PostgreSQL, to LiberTP software to migrate transaction-processing applications, Bull’s solutions open enterprise data to modern environments that support the cloud, virtualized environments and mobile devices. Most importantly, Bull can help free companies from high licensing and maintenance costs, offer dramatic operational efficiencies and the agility required to flourish in competitive business environments.


Bursting into the Clouds – Experimenting with Cloud Bursting

Guest Post by Dotan Horovits, Senior Solutions Architect at GigaSpaces

Dotan Horovits is one of the primary architects at GigaSpaces

Who needs Cloud Bursting?

We see many organizations examining the cloud as a replacement for their existing in-house IT. But we see interest in the cloud even among organizations that have no plans to replace their traditional data center.

One prominent use case is Cloud Bursting:

Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and bursts into a public cloud when the demand for computing capacity spikes. The advantage of such a hybrid cloud deployment is that an organization only pays for extra compute resources when they are needed.
[Definition from SearchCloudComputing]

Cloud Bursting appears to be a prominent use case in cloud on-boarding projects. In a recent post, Nati Shalom nicely summarizes the economic rationale for cloud bursting and discusses theoretical approaches to its architecture. In this post I’d like to examine the architectural challenges more closely and explore possible designs for Cloud Bursting.

Examining Cloud Bursting Architecture

Overflowing compute to the cloud is addressed by workload migration: when we need more compute power we just spin up more VMs in the cloud (the secondary site) and install instances of the application. The challenge in workload migration is how to build an environment in the secondary site consistent with the primary site, so the system can overflow transparently. This is usually addressed by DevOps tools such as Chef, Puppet, CFEngine and Cloudify, which capture the setup and are able to bootstrap the application stack on different environments. In my example I used Cloudify to provide a consistent installation between the EC2 and RackSpace clouds.

The Cloud Bursting problem becomes more interesting when data is concerned. In his post Nati mentions two approaches for handling data during cloud bursting:

1. The primary site approach – Use the private cloud as the primary data site, and then point all the burst activity to that site.
2. Federated site approach – This approach is similar to the way Content Distribution Networks (CDN) work today. With this approach we maintain a replica of the data available at each site and keep their replicas in sync.

The primary site approach incurs a heavy latency penalty, as each computation needs to make the round trip to the primary site to fetch the data it operates on. Such an architecture is not applicable to online flows.

The federated site approach uses data synchronization to bring the data to the compute, which avoids that latency and enables online flows. But if we want to support “hot” bursting to the cloud, we have to replicate the data between the sites in an ongoing, streaming fashion, so that the data is already available in the cloud when the peak occurs and we can spin up compute instances and immediately start to redirect load. Let’s see how it’s done.

Cloud Bursting – Examining the Federated Site Approach

Let’s roll up our sleeves and start experimenting hands-on with the federated site approach to Cloud Bursting architecture. As the reference application, let’s take Spring’s PetClinic Sample Application and run it on an Apache Tomcat web container. The application will persist its data locally to a MySQL relational database.

The primary site, representing our private data center, will run the above stack and serve the PetClinic online service. The secondary site, representing the public cloud, will only have a MySQL database, and we will replicate data between the primary and secondary sites to keep data synchronized. As soon as the load on the primary site increases beyond a certain threshold, we will spin up a machine with an instance of Tomcat and the PetClinic application, and update the load balancer to offload some of the traffic to the secondary site.

In my experiment I used the Amazon EC2 and RackSpace IaaS providers to simulate the two distinct environments of the primary and secondary sites, but any on-demand environments will do.

REPLICATING RDBMS DATA OVER WAN

How do we replicate data between the MySQL database instances over the WAN? In this experiment we’ll use the following pattern:

1. Monitor data-mutating SQL statements on the source site. Turn on the MySQL query log and write a listener (“Feeder”) that intercepts data-mutating SQL statements and writes them to the GigaSpaces In-Memory Data Grid (see the sketch after this list).

2. Replicate data-mutating SQL statements over the WAN. I used GigaSpaces WAN Replication to replicate the SQL statements between the data grids of the primary and secondary sites in a real-time, transactional manner.

3. Execute data-mutating SQL statements on the target site. Write a listener (“Processor”) that intercepts incoming SQL statements on the data grid and executes them against the local MySQL DB.
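
As a minimal sketch of the logging prerequisite in step 1 (MySQL settings only; the Feeder and Processor themselves are GigaSpaces components, available in the GitHub project below), routing the general query log to a table makes it easy for a listener to poll for new statements:

mysql> SET GLOBAL log_output = 'TABLE';
mysql> SET GLOBAL general_log = 'ON';

mysql> SELECT event_time, argument
    ->   FROM mysql.general_log
    ->  WHERE command_type = 'Query'
    ->    AND argument REGEXP '^(INSERT|UPDATE|DELETE)';

Bear in mind that the general log adds measurable overhead, which is acceptable for this experiment but worth weighing for production loads.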


To support bi-directional data replication we simply deploy both the Feeder and the Processor on each site.

AUTO-BOOTSTRAP SECONDARY SITE

When a peak load occurs, we need to react immediately and perform a series of operations to activate the secondary site:

1. Spin up compute nodes (VMs)
2. Download and install the Tomcat web server
3. Download and install the PetClinic application
4. Configure the load balancer with the new node
5. When the peak load is over, perform the reverse flow to tear down the secondary site

We need to automate this bootstrap process to support a real-time response to peak-load events. How do we do this automation? I used the GigaSpaces Cloudify open-source product as the automation tool for setting up and taking down the secondary site, utilizing its out-of-the-box connectors for EC2 and RackSpace. Cloudify also provides self-healing in case of VM or process failure, and can later help in scaling the application (in the case of clustered applications).

Implementation Details

The result of the above experimentation is available on GitHub. It contains:

- DB scripts for setting up the logging, schema and demo data for the PetClinic application
- The PetClinic application (.war) file
- The WAN replication gateway module
- A Cloudify recipe for automating the PetClinic deployment

See the documentation on GitHub for detailed instructions on how to configure the above with your specific deployment details.

Conclusion

Cloud Bursting is a common use case for cloud on-boarding, which requires good architecture patterns. In this post I tried to suggest some patterns and experiment with a simple demo, sharing it with the community to get feedback and raise discussion on these cloud architectures.

More information is available in an upcoming GigaSpaces webinar on Transactional Cross-Site Data Replication on June 20th (register at: http://bit.ly/IM0w9F).