
Posts

Machine Learning Connector in Presto

This is a quick tutorial for the presto-ml connector. The connector is not actively maintained, and the only supported model is SVM. You can see the sample queries below in the test directory. As with Teradata Aster and BigQuery ML, there are two kinds of functions:

learn_classifier: receives training data and generates the model
classify: receives the model and test data and returns the prediction

SELECT classify(features(1, 2), model)
FROM (
  SELECT learn_classifier(labels, features) AS model
  FROM (
    VALUES (1, features(1, 2))
  ) t(labels, features)
) t2
→ 1

SELECT classify(features(1, 2), model)
FROM (
  SELECT learn_classifier(labels, features) AS model
  FROM (
    VALUES ('cat', features(1, 2))
  ) t(labels, features)
) t2
→ 'cat'

Let's try using the Iris data set.

CREATE TABLE iris (
  id int
, sepal_length double
, sepal_width double
, petal_length double
, petal_width double
, species varchar
)

INSERT INT...
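The two-step learn/classify flow can be sketched outside Presto as well. Below is a minimal pure-Python analogy: note this is not the connector's SVM but a hypothetical nearest-centroid classifier, used only to illustrate the same "train in one function, predict in another" pattern.

```python
# Minimal sketch of the learn_classifier / classify pattern.
# NOT the presto-ml SVM: a hypothetical nearest-centroid model for illustration.

def learn_classifier(labels, features):
    """Group feature vectors by label and return per-label centroids (the 'model')."""
    sums, counts = {}, {}
    for label, vec in zip(labels, features):
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
    return {label: [s / counts[label] for s in acc] for label, acc in sums.items()}

def classify(vec, model):
    """Return the label whose centroid is closest to vec."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist2(vec, model[label]))

model = learn_classifier(['cat', 'dog'], [[1.0, 2.0], [8.0, 9.0]])
print(classify([1.0, 2.0], model))  # → 'cat'
```

As in the SQL above, the "model" is just a value produced by one aggregate-like step and consumed by a per-row prediction step.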

INSERT OVERWRITE in Presto

If you are a Hive user and ETL developer, you have probably seen a lot of INSERT OVERWRITE. Though it's not yet documented, Presto also supports OVERWRITE mode for partitioned tables. Currently there are 3 modes: OVERWRITE, APPEND and ERROR.

OVERWRITE overwrites the existing partition.
APPEND appends rows to the existing partition.
ERROR fails when the partition already exists.

You can change the mode with the set session command.

set session hive.insert_existing_partitions_behavior = 'overwrite';
set session hive.insert_existing_partitions_behavior = 'append';
set session hive.insert_existing_partitions_behavior = 'error';

The enhanced feature for unpartitioned tables was ongoing in this PR ( https://github.com/prestosql/presto/pull/648 ) by James Xu. The enhancement was merged as https://github.com/prestosql/presto/pull/924
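The three behaviors can be summarized with a toy in-memory model of a partitioned table (a hypothetical sketch of the semantics, not Presto's implementation):

```python
# Toy model of hive.insert_existing_partitions_behavior semantics.
# Hypothetical sketch, not Presto's implementation.

def insert_partition(table, partition, rows, behavior):
    """table: dict mapping partition key -> list of rows."""
    if partition in table:
        if behavior == 'overwrite':
            table[partition] = list(rows)      # replace the existing partition
        elif behavior == 'append':
            table[partition].extend(rows)      # add rows to the existing partition
        elif behavior == 'error':
            raise ValueError(f"partition {partition!r} already exists")
    else:
        table[partition] = list(rows)          # new partition: always just written

t = {'dt=20190101': [1]}
insert_partition(t, 'dt=20190101', [2, 3], 'append')
print(t['dt=20190101'])  # → [1, 2, 3]
insert_partition(t, 'dt=20190101', [9], 'overwrite')
print(t['dt=20190101'])  # → [9]
```

Note that all three modes behave the same when the target partition does not exist yet; they only differ on an existing partition.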

MSCK in Trino

Presto SQL release 304 contains a new procedure, system.sync_partition_metadata(), developed by @luohao. This is similar to Hive's MSCK REPAIR TABLE. The documentation about Hive Connector Procedures is https://prestosql.io/docs/current/connector/hive.html#procedures

The syntax is `system.sync_partition_metadata(schema_name, table_name, mode)`. The supported modes are add, drop and full. Example queries:

call system.sync_partition_metadata('default', 'test_partition', 'add');
call system.sync_partition_metadata('default', 'test_partition', 'drop');
call system.sync_partition_metadata('default', 'test_partition', 'full');

# Mode DROP
hive> create table default.test_partition (c1 int) partitioned by (dt string);
hive> insert overwrite table default.test_partition partition(dt = '20190101') values (1);
hive> dfs -mv hdfs://hadoop-master:9000/user/hive/warehouse/test_partition/dt=20190101 /...
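What the three modes reconcile can be sketched as simple set operations between partitions present on the filesystem and partitions registered in the metastore. This is a hypothetical model of the semantics, not the actual procedure's code:

```python
# Sketch of add/drop/full: reconciling filesystem partitions with the metastore.
# Hypothetical model of the semantics, not the actual procedure.

def sync_partition_metadata(metastore, filesystem, mode):
    """Both arguments are sets of partition names; returns the new metastore set."""
    if mode == 'add':    # register partitions found on disk but missing from metastore
        return metastore | filesystem
    if mode == 'drop':   # deregister partitions no longer present on disk
        return metastore & filesystem
    if mode == 'full':   # both directions: metastore ends up matching the disk
        return set(filesystem)
    raise ValueError(mode)

ms = {'dt=20190101', 'dt=20190102'}
fs = {'dt=20190102', 'dt=20190103'}
print(sorted(sync_partition_metadata(ms, fs, 'add')))   # → ['dt=20190101', 'dt=20190102', 'dt=20190103']
print(sorted(sync_partition_metadata(ms, fs, 'drop')))  # → ['dt=20190102']
print(sorted(sync_partition_metadata(ms, fs, 'full')))  # → ['dt=20190102', 'dt=20190103']
```

The "Mode DROP" demo above (moving a partition directory away with dfs -mv) creates exactly the metastore/filesystem mismatch that the drop mode cleans up.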

Kanazawa 2018

Went to Kanazawa on Oct 27 and 28. I would recommend eating 'Fu' in addition to the seafood. It was really delicious.

Day 1: Kanazawa Station → Higashi Chaya District → Korinbo → Kenrokuen → Omicho Market → sento (public bath) → hotel
Day 2: Omicho Market → 21st Century Museum of Contemporary Art, Kanazawa → D.T. Suzuki Museum → Kanazawa Station

I wanted to go to Kanazawa Umimirai Library, but it was closed. I should have investigated it before booking the hotel.

Kanazawa st. / Misodare Dengaku / Fu / Noren / Kenrokuen / GNOME / Sento / Omicho-Ichiba / 21st Century Museum of Contemporary Art, Kanazawa / D.T. Suzuki Museum

Bulk Insert to Teradata using Python

This snippet bulk-loads a CSV into Teradata via Python. Recently teradatasql was released, but this code uses PyTd. If you haven't set up PyTd, please install the library with `pip install teradata`.

import teradata
import csv

udaExec = teradata.UdaExec()
session = udaExec.connect("tdpid")

data = list(csv.reader(open("testExecuteManyBach.csv")))
batchsize = 10000
for num in range(0, len(data), batchsize):
    session.executemany("insert into testExecuteManyBatch values (?, ?, ?, ?)",
                        data[num:num+batchsize], batch=True)

The points are batch=True and specifying batchsize. If you don't set the batchsize or the size is too large, it will fail (I forgot the actual message though). The performance in my environment (1 node) was 10,000 rows/sec. The table has 4 columns. I assume tens of thousands of rows is fine, but more rows should be imported with FastLoad or MLOAD.
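The batching loop is the key part, and it works independently of Teradata. A small self-contained sketch of the same slicing logic (the helper name `chunks` is mine, not from PyTd):

```python
# The same range(0, len(rows), size) slicing as the loader above,
# factored into a standalone helper for clarity.
def chunks(rows, size):
    """Yield successive slices of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

data = list(range(25))
print([len(batch) for batch in chunks(data, 10)])  # → [10, 10, 5]
```

Each yielded slice would be handed to one executemany() call; the last batch is simply whatever remains, so no padding is needed.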

Teradata XMLAGG

Unfortunately, Teradata has no equivalent of string_agg in SQL Server and PostgreSQL or group_concat in MySQL. It is not exactly a replacement, but there is an aggregate function called xmlagg that can achieve something similar.

First, let's prepare the data.

drop table test_xml_agg
;
create table test_xml_agg (
 c1 int
,c2 int
,c3 varchar(10)
)
;
insert into test_xml_agg values (1,1,'hello');
insert into test_xml_agg values (1,2,'world');
insert into test_xml_agg values (2,1,'this');
insert into test_xml_agg values (2,2,'is');
insert into test_xml_agg values (2,3,'xmlagg');

Now let's write a query that groups by the first column, expands the third column in the order of the second column, and joins the values with commas.

select
 c1
,trim(trailing ',' from xmlagg(c3 || ',' order by c2) (varchar(100))) as string_agg
from test_xml_agg
group by 1
;

Result Set
c1 string_agg
1  hello, world
2  this, is, xmlagg

It looks cluttered compared to string_agg, but xmlagg(c3 || ',' order by c2) concatenates the third column in ascending order of the second column. Then (varchar(100)) casts the type from sysudtlib.xml to varchar, and finally the trailing comma is trimmed.

Update on 2018/10/19: tdstats.udfco...
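For reference, here is the same aggregation expressed in plain Python: group by c1, order within each group by c2, and join c3 with commas (the XML serialization detail of xmlagg is ignored here, so values are joined with bare commas):

```python
# Pure-Python equivalent of the xmlagg query above:
# group by c1, order by c2 within the group, join c3 with commas.
from itertools import groupby

rows = [(1, 1, 'hello'), (1, 2, 'world'),
        (2, 1, 'this'), (2, 2, 'is'), (2, 3, 'xmlagg')]

result = {}
for c1, group in groupby(sorted(rows), key=lambda r: r[0]):
    # sorted(rows) orders by (c1, c2), so each group is already in c2 order
    result[c1] = ','.join(c3 for _, _, c3 in group)

print(result)  # → {1: 'hello,world', 2: 'this,is,xmlagg'}
```

The order by c2 inside xmlagg corresponds to the sort step here; without it the concatenation order would be nondeterministic.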

Short circuit on Teradata

My colleague found that some queries on Teradata are improved by short-circuit evaluation. This is common knowledge among software engineers, but a DBA may not know it. I knew about short circuits, but I didn't know they could be effective in SQL. For example, the following query doesn't seem bad, and you may think everything is ok. (Please ignore 'like any' here, since it is rewritten internally.)

SELECT *
FROM t1
WHERE c1 LIKE '%a0001%'
OR c1 LIKE '%a0002%'
OR c1 LIKE '%a0003%'
...
OR c1 LIKE '%a9999%'
;

By using short circuit, this query can be rewritten like this.

SELECT *
FROM t1
WHERE c1 LIKE '%a%'
and (
 c1 LIKE '%a0001%'
 OR c1 LIKE '%a0002%'
 OR c1 LIKE '%a0003%'
 ...
 OR c1 LIKE '%a9999%'
)
;

Of course, this rewrite isn't effective in all situations. It depends on the data character...
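The same idea can be demonstrated in any language with short-circuiting boolean operators. A small Python sketch (an analogy for the rewrite, not a claim about Teradata's internals): the cheap '%a%'-style prefilter runs first, and rows that fail it never touch the long pattern list.

```python
# Sketch of the cheap-prefilter rewrite: 'a' in s is evaluated first,
# and when it is False the expensive pattern scan is skipped entirely.
patterns = [f'a{i:04d}' for i in range(1, 10000)]      # a0001 .. a9999

def match_naive(s):
    return any(p in s for p in patterns)               # always scans the pattern list

def match_short_circuit(s):
    return 'a' in s and any(p in s for p in patterns)  # rows without 'a' bail out early

print(match_short_circuit('xxx a0042 yyy'))  # → True
print(match_short_circuit('nothing to see'))  # → False, without scanning patterns
```

Both functions return the same answers; the rewrite only changes how much work the non-matching rows cost, which is exactly the effect observed in the SQL version.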