Datometry for Teradata and Oracle customers: Migrate to the cloud without changing code

Teradata has long distinguished itself not only by scale and performance, but also by an advanced SQL engine designed to handle very complex constructs such as recursive queries, implicit joins, unique syntax, and custom logic for parallelizing workloads. As a result, Teradata has long positioned itself as the platform for organizations with the most difficult analytical problems. And, with Vantage, it has finally embraced the cloud aggressively.

The Redshifts, Synapses, Snowflakes, and BigQuerys of the world position themselves as pay-as-you-go, hyperscale cloud alternatives that are more economical than traditional Teradata platforms. But migrations often founder on functional gaps and the need to modify source code and schemas.

Of course, there is a startup that believes it has an answer to that.

According to Datometry, the answer is database virtualization, not data virtualization. The approach inserts a runtime that acts as a buffer between the application's Teradata SQL statements and the target cloud data warehouse, allowing Teradata customers to run Teradata queries against a variety of targets without modifying or wholesale rewriting existing SQL programs. Its product, HyperQ, is now adding Oracle to its list of database sources.

At the core of Datometry’s approach is a unique hypervisor that emulates SQL database calls on the fly. Internally, it breaks down complex calls, stored procedures, and macros into atomic operations that the target data warehouse can understand. For example, a recursive query used to traverse a nested or hierarchical data structure is transformed on the fly into a series of simpler individual calls to the target, with intermediate results stored in a temporary table managed by the hypervisor. For operations that grow complex, it provides policy-based queuing that matches the existing policies running at the source, and it exposes JDBC and ODBC APIs for BI and ETL tools.
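
To make that concrete, here is an illustrative sketch (not Datometry’s actual rewrite logic) of how a recursive query over an employee hierarchy could be unrolled for a target that lacks recursive SQL, with intermediate results held in a temporary table; the schema and table names are hypothetical.

```sql
-- What the application issues, unchanged (hypothetical schema:
-- employees(emp_id, mgr_id, name)).
WITH RECURSIVE org_chart (emp_id, name, depth) AS (
    SELECT emp_id, name, 0
    FROM employees
    WHERE mgr_id IS NULL
    UNION ALL
    SELECT e.emp_id, e.name, o.depth + 1
    FROM employees e
    JOIN org_chart o ON e.mgr_id = o.emp_id
)
SELECT * FROM org_chart;

-- One way a virtualization layer could emulate it on the target:
-- seed a temporary table with the anchor rows...
CREATE TEMPORARY TABLE tmp_org_chart AS
SELECT emp_id, name, 0 AS depth
FROM employees
WHERE mgr_id IS NULL;

-- ...then repeat this step, advancing the depth each pass, until it
-- inserts no new rows (the loop lives in the runtime, not in the
-- application's SQL)...
INSERT INTO tmp_org_chart (emp_id, name, depth)
SELECT e.emp_id, e.name, o.depth + 1
FROM employees e
JOIN tmp_org_chart o ON e.mgr_id = o.emp_id
WHERE o.depth = 0;  -- current iteration's depth, substituted by the runtime

-- ...and finally return the accumulated result to the client.
SELECT emp_id, name, depth FROM tmp_org_chart;
```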

Of course, Datometry is not the first to promise “don’t change the program.” SQL translators exist, but Datometry argues they are often inadequate: by its estimate, code converters typically handle only about 60 to 70 percent of workloads. The traditional workaround has been to add non-SQL code to the application to paper over the differences between Teradata SQL and the target database’s SQL. Likewise, custom data types and structures are often overlooked by cloud database schema migration tools.
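
As a small example of the kind of dialect gap a converter has to bridge (illustrative only, using a hypothetical orders table): Teradata’s QUALIFY clause filters directly on a window function, while a target that lacks QUALIFY needs the query rewritten as a subquery.

```sql
-- Teradata dialect: QUALIFY keeps only the latest order per customer.
SELECT customer_id, order_date, order_total
FROM orders
QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id
                           ORDER BY order_date DESC) = 1;

-- Equivalent ANSI-style rewrite a converter (or runtime) would have to
-- produce for a target that does not support QUALIFY.
SELECT customer_id, order_date, order_total
FROM (
    SELECT customer_id, order_date, order_total,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY order_date DESC) AS rn
    FROM orders
) latest
WHERE rn = 1;
```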

Can Datometry handle all the peculiarities of Teradata SQL? The company claims to cover 99% of Teradata workloads. Certainly, there is a cost: Datometry’s virtualization layer adds roughly 1-2% overhead, though, as with EPA mileage estimates, results will vary by workload. The company claims that is a small price to pay compared to the overhead of maintaining the code produced by SQL code and schema conversion tools.

Datometry performed its first proof of concept about four years ago using SQL Server on an on-premises HPE Superdome machine, then began supporting Azure Synapse and Google BigQuery in the cloud. As noted above, it has just announced Oracle support in preview. Notably, Datometry has not yet targeted Amazon Redshift or Snowflake, so those remain gaps in its coverage.
