Flink Series Articles
1. Flink deployment, core concepts, source/transformation/sink usage examples, the four cornerstones, and more: links to the combined series articles
13. Flink Table API & SQL: basic concepts, common APIs, and getting-started examples
14. Flink Table API & SQL data types: built-in data types and their properties
15. Flink Table API & SQL streaming concepts: dynamic tables, time attributes, how updating results are handled, temporal tables, joins on streams, determinism on streams, and query configuration
16. Flink Table API & SQL, connecting to external systems: connectors and formats, with a FileSystem example (1)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, with an Elasticsearch example (2)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, with an Apache Kafka example (3)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, with a JDBC example (4)
16. Flink Table API & SQL, connecting to external systems: connectors and formats, with an Apache Hive example (6)
20. Flink SQL Client: try Flink SQL without writing code, and submit SQL jobs directly to a cluster
22. Flink Table API & SQL: DDL for creating tables
24. Flink Table API & SQL Catalogs: introduction, types, DDL via Java API and SQL, operating catalogs via Java API and SQL (part 1)
24. Flink Table API & SQL Catalogs: operating databases and tables via the Java API (part 2)
26. Flink SQL: overview and getting-started example
27. Flink SQL SELECT (select, where, distinct, order by, limit, set operations, and deduplication), with detailed examples (1)
27. Flink SQL SELECT (SQL Hints and Joins), with detailed examples (2)
27. Flink SQL SELECT (window functions), with detailed examples (3)
27. Flink SQL SELECT (window aggregation), with detailed examples (4)
27. Flink SQL SELECT (group aggregation, over aggregation, and window join), with detailed examples (5)
27. Flink SQL SELECT (Top-N, window Top-N, and window deduplication), with detailed examples (6)
27. Flink SQL SELECT (pattern recognition), with detailed examples (7)
29. Flink SQL statements: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB, UPDATE, DELETE (1)
29. Flink SQL statements: DESCRIBE, EXPLAIN, USE, SHOW, LOAD, UNLOAD, SET, RESET, JAR, JOB, UPDATE, DELETE (2)
30. Flink SQL Client: using configuration files (tables, views, etc.) with Kafka and filesystem examples
32. Flink Table API & SQL: implementing user-defined sources and sinks, with detailed examples
41. Flink Hive dialect: introduction and detailed examples
42. Flink Table API & SQL: Hive Catalog
43. Flink reading from and writing to Hive, with detailed verification examples
44. Flink modules: introduction and usage examples, plus detailed examples of using Hive built-in and user-defined functions from Flink SQL (some claims found online appear to be wrong)

Table of contents
Flink series articles
5. Catalog API
  1. Database operations
    1. JdbcCatalog example
    2. HiveCatalog example: listing the tables of a given database
    3. HiveCatalog example: creating a database
  2. Table operations

This article briefly introduces operating on databases and tables through the Java API, with concrete runnable examples for each.
It assumes that Flink, Hive, and a Hadoop cluster are up and working.
The article has two parts: database operations and table operations.
The Java API examples were implemented against Flink 1.13.5; unless otherwise noted, the SQL examples use Flink 1.17.
5. Catalog API
1. Database operations
The common database operations are listed below, using JdbcCatalog as the example; the Flink version is 1.17.0.

// create database
catalog.createDatabase("mydb", new CatalogDatabaseImpl(...), false);

// drop database
catalog.dropDatabase("mydb", false);

// alter database
catalog.alterDatabase("mydb", new CatalogDatabaseImpl(...), false);

// get database
catalog.getDatabase("mydb");

// check if a database exists
catalog.databaseExists("mydb");

// list databases in the catalog (the Catalog interface's listDatabases() takes no argument)
catalog.listDatabases();
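The CatalogDatabaseImpl(...) placeholders above elide the database metadata. Below is a minimal sketch of filling them in; the property key/value and the comment text are illustrative assumptions, not fixed API values:

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.table.catalog.CatalogDatabase;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;

// build the database metadata: free-form properties plus an optional comment
Map<String, String> props = new HashMap<>();
props.put("owner", "alanchan"); // assumed example property
CatalogDatabase db = new CatalogDatabaseImpl(props, "a demo database");
if (!catalog.databaseExists("mydb")) {
    catalog.createDatabase("mydb", db, false); // ignoreIfExists = false
}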
1. JdbcCatalog example

pom.xml
<properties>
    <encoding>UTF-8</encoding>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <java.version>1.8</java.version>
    <scala.version>2.12</scala.version>
    <flink.version>1.17.0</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-clients -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-clients</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-scala_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-java</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-scala_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-java</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-scala-bridge_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-java-bridge</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-table-planner -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner_2.12</artifactId><version>${flink.version}</version><scope>test</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-common</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-jdbc</artifactId><version>3.1.0-1.17</version><scope>provided</scope></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-csv</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-json</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>mysql</groupId><artifactId>mysql-connector-java</artifactId><version>5.1.38</version></dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-table-planner-loader -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner-loader</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
    <!-- https://mvnrepository.com/artifact/org.apache.flink/flink-table-runtime -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-runtime</artifactId><version>${flink.version}</version><scope>provided</scope></dependency>
</dependencies>

java
import java.util.List;

import org.apache.flink.connector.jdbc.catalog.JdbcCatalog;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.Catalog;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;

/**
 * @author alanchan
 */
public class TestJdbcCatalogDemo {

    /**
     * @param args
     * @throws DatabaseNotExistException
     * @throws CatalogException
     */
    public static void main(String[] args) throws CatalogException, DatabaseNotExistException {
        // env
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        // public JdbcCatalog(
        //     String catalogName,
        //     String defaultDatabase,
        //     String username,
        //     String pwd,
        //     String baseUrl)
        // Equivalent SQL DDL:
        // CREATE CATALOG alan_catalog WITH(
        //     'type' = 'jdbc',
        //     'default-database' = 'test?useSSL=false',
        //     'username' = 'root',
        //     'password' = '123456',
        //     'base-url' = 'jdbc:mysql://192.168.10.44:3306'
        // );
        Catalog catalog = new JdbcCatalog("alan_catalog", "test?useSSL=false", "root", "123456", "jdbc:mysql://192.168.10.44:3306");

        // register the catalog
        tenv.registerCatalog("alan_catalog", catalog);

        // list the tables of the 'test' database
        List<String> tables = catalog.listTables("test");
        // System.out.println("test tables: " + tables);
        for (String table : tables) {
            System.out.println("Database: test, table: " + table);
        }
    }
}
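Once the catalog is registered, it can also be browsed with SQL instead of the Catalog object. A minimal sketch, assuming the same catalog and database names as above (it would go at the end of main()):

// switch into the registered catalog and its 'test' database, then list tables via SQL
tenv.useCatalog("alan_catalog");
tenv.useDatabase("test");
tenv.executeSql("SHOW TABLES").print();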
Run results:
Database: test, table: allowinsert
Database: test, table: author
Database: test, table: batch_job_execution
Database: test, table: batch_job_execution_context
Database: test, table: batch_job_execution_params
Database: test, table: batch_job_execution_seq
Database: test, table: batch_job_instance
Database: test, table: batch_job_seq
Database: test, table: batch_step_execution
Database: test, table: batch_step_execution_context
Database: test, table: batch_step_execution_seq
Database: test, table: book
Database: test, table: customertest
Database: test, table: datax_user
Database: test, table: dm_sales
Database: test, table: dms_attach_t
Database: test, table: dx_user
Database: test, table: dx_user_copy
Database: test, table: employee
Database: test, table: hibernate_sequence
Database: test, table: permissions
Database: test, table: person
Database: test, table: personinfo
Database: test, table: role
Database: test, table: studenttotalscore
Database: test, table: t_consume
Database: test, table: t_czmx_n
Database: test, table: t_kafka_flink_user
Database: test, table: t_merchants
Database: test, table: t_recharge
Database: test, table: t_user
Database: test, table: t_withdrawal
Database: test, table: updateonly
Database: test, table: user
2. HiveCatalog example: listing the tables of a given database

This example requires a working Hadoop and Hive environment and is executed as a packaged jar file. For Flink-Hive integration, see: 42. Flink Table API & SQL: Hive Catalog.
pom.xml

<properties>
    <encoding>UTF-8</encoding>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <java.version>1.8</java.version>
    <scala.version>2.12</scala.version>
    <flink.version>1.13.6</flink.version>
</properties>
<dependencies>
    <dependency>
        <groupId>jdk.tools</groupId>
        <artifactId>jdk.tools</artifactId>
        <version>1.8</version>
        <scope>system</scope>
        <systemPath>${JAVA_HOME}/lib/tools.jar</systemPath>
    </dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-clients_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-scala_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-java</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-scala_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-streaming-java_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-scala-bridge_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-api-java-bridge_2.12</artifactId><version>${flink.version}</version></dependency>
    <!-- Flink planner (pre-1.9) -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner_2.12</artifactId><version>${flink.version}</version></dependency>
    <!-- Blink planner (the default since 1.11) -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-planner-blink_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-table-common</artifactId><version>${flink.version}</version></dependency>
    <!-- Flink connectors -->
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-kafka_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-sql-connector-kafka_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-jdbc_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-csv</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-json</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-connector-hive_2.12</artifactId><version>${flink.version}</version></dependency>
    <dependency><groupId>org.apache.hive</groupId><artifactId>hive-metastore</artifactId><version>2.1.0</version></dependency>
    <dependency><groupId>org.apache.hive</groupId><artifactId>hive-exec</artifactId><version>3.1.2</version></dependency>
    <dependency><groupId>org.apache.flink</groupId><artifactId>flink-shaded-hadoop-2-uber</artifactId><version>2.7.5-10.0</version></dependency>
    <dependency>
        <groupId>mysql</groupId><artifactId>mysql-connector-java</artifactId><version>5.1.38</version>
        <!-- <version>8.0.20</version> -->
    </dependency>
    <!-- high-performance async components: Vert.x -->
    <dependency><groupId>io.vertx</groupId><artifactId>vertx-core</artifactId><version>3.9.0</version></dependency>
    <dependency><groupId>io.vertx</groupId><artifactId>vertx-jdbc-client</artifactId><version>3.9.0</version></dependency>
    <dependency><groupId>io.vertx</groupId><artifactId>vertx-redis-client</artifactId><version>3.9.0</version></dependency>
    <!-- logging -->
    <dependency><groupId>org.slf4j</groupId><artifactId>slf4j-log4j12</artifactId><version>1.7.7</version><scope>runtime</scope></dependency>
    <dependency><groupId>log4j</groupId><artifactId>log4j</artifactId><version>1.2.17</version><scope>runtime</scope></dependency>
    <dependency><groupId>com.alibaba</groupId><artifactId>fastjson</artifactId><version>1.2.44</version></dependency>
    <dependency><groupId>org.projectlombok</groupId><artifactId>lombok</artifactId><version>1.18.2</version><scope>provided</scope></dependency>
</dependencies>
<build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <plugins>
        <!-- compiler plugin -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId><artifactId>maven-compiler-plugin</artifactId><version>3.5.1</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <!-- <encoding>${project.build.sourceEncoding}</encoding> -->
            </configuration>
        </plugin>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId><artifactId>maven-surefire-plugin</artifactId><version>2.18.1</version>
            <configuration>
                <useFile>false</useFile>
                <disableXmlReport>true</disableXmlReport>
                <includes>
                    <include>**/*Test.*</include>
                    <include>**/*Suite.*</include>
                </includes>
            </configuration>
        </plugin>
        <!-- shade plugin (bundles all dependencies into the jar) -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId><artifactId>maven-shade-plugin</artifactId><version>2.3</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals><goal>shade</goal></goals>
                    <configuration>
                        <filters>
                            <filter>
                                <artifact>*:*</artifact>
                                <excludes>
                                    <!-- zip -d learn_spark.jar META-INF/*.RSA META-INF/*.DSA META-INF/*.SF -->
                                    <exclude>META-INF/*.SF</exclude>
                                    <exclude>META-INF/*.DSA</exclude>
                                    <exclude>META-INF/*.RSA</exclude>
                                </excludes>
                            </filter>
                        </filters>
                        <transformers>
                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                <!-- set the jar entry class (optional) -->
                                <mainClass>org.table_sql.TestHiveCatalogDemo</mainClass>
                            </transformer>
                        </transformers>
                    </configuration>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

java
import java.util.List;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
import org.apache.flink.table.catalog.hive.HiveCatalog;

/**
 * @author alanchan
 */
public class TestHiveCatalogDemo {

    /**
     * @param args
     * @throws DatabaseNotExistException
     * @throws CatalogException
     */
    public static void main(String[] args) throws CatalogException, DatabaseNotExistException {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        String name = "alan_hive";
        // database name: testhive
        String defaultDatabase = "testhive";
        String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";

        HiveCatalog hiveCatalog = new HiveCatalog(name, defaultDatabase, hiveConfDir);
        tenv.registerCatalog("alan_hive", hiveCatalog);
        // use the registered catalog
        tenv.useCatalog("alan_hive");

        List<String> tables = hiveCatalog.listTables(defaultDatabase); // tables should contain "test"
        // System.out.println("test tables: " + tables);
        for (String table : tables) {
            System.out.println("Database: testhive, table: " + table);
        }
    }
}
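For comparison, the same kind of catalog can be registered with DDL instead of the Java constructor, given the same Hive connector dependencies on the classpath. A sketch under the assumptions above (alan_hive2 is a hypothetical name, chosen to avoid clashing with the catalog already registered):

// register a HiveCatalog via SQL DDL (paths and database name assumed as in the example)
tenv.executeSql(
        "CREATE CATALOG alan_hive2 WITH ("
        + " 'type' = 'hive',"
        + " 'default-database' = 'testhive',"
        + " 'hive-conf-dir' = '/usr/local/bigdata/apache-hive-3.1.2-bin/conf'"
        + ")");
tenv.useCatalog("alan_hive2");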
Run results:
################ Hive query results ##################
0: jdbc:hive2://server4:10000> use testhive;
No rows affected (0.021 seconds)
0: jdbc:hive2://server4:10000> show tables;
+----------------------+
|       tab_name       |
+----------------------+
| apachelog |
| col2row1 |
| col2row2 |
| cookie_info |
| dual |
| dw_zipper |
| emp |
| employee |
| employee_address |
| employee_connection |
| ods_zipper_update |
| row2col1 |
| row2col2 |
| singer |
| singer2 |
| student |
| student_dept |
| student_from_insert |
| student_hdfs |
| student_hdfs_p |
| student_info |
| student_local |
| student_partition |
| t_all_hero_part_msck |
| t_usa_covid19 |
| t_usa_covid19_p |
| tab1 |
| tb_dept01 |
| tb_dept_bucket |
| tb_emp |
| tb_emp01 |
| tb_emp_bucket |
| tb_json_test1 |
| tb_json_test2 |
| tb_login |
| tb_login_tmp |
| tb_money |
| tb_money_mtn |
| tb_url |
| the_nba_championship |
| tmp_1 |
| tmp_zipper |
| user_dept |
| user_dept_sex |
| users |
| users_bucket_sort |
| website_pv_info |
| website_url_info |
+----------------------+
48 rows selected (0.027 seconds)

################ Flink query results ##################
[alanchan@server2 bin]$ flink run /usr/local/bigdata/flink-1.13.5/examples/table/table_sql-0.0.1-SNAPSHOT.jar
Database: testhive, table: student
Database: testhive, table: user_dept
Database: testhive, table: user_dept_sex
Database: testhive, table: t_all_hero_part_msck
Database: testhive, table: student_local
Database: testhive, table: student_hdfs
Database: testhive, table: student_hdfs_p
Database: testhive, table: tab1
Database: testhive, table: student_from_insert
Database: testhive, table: student_info
Database: testhive, table: student_dept
Database: testhive, table: student_partition
Database: testhive, table: emp
Database: testhive, table: t_usa_covid19
Database: testhive, table: t_usa_covid19_p
Database: testhive, table: employee
Database: testhive, table: employee_address
Database: testhive, table: employee_connection
Database: testhive, table: dual
Database: testhive, table: the_nba_championship
Database: testhive, table: tmp_1
Database: testhive, table: cookie_info
Database: testhive, table: website_pv_info
Database: testhive, table: website_url_info
Database: testhive, table: users
Database: testhive, table: users_bucket_sort
Database: testhive, table: singer
Database: testhive, table: apachelog
Database: testhive, table: singer2
Database: testhive, table: tb_url
Database: testhive, table: row2col1
Database: testhive, table: row2col2
Database: testhive, table: col2row1
Database: testhive, table: col2row2
Database: testhive, table: tb_json_test1
Database: testhive, table: tb_json_test2
Database: testhive, table: tb_login
Database: testhive, table: tb_login_tmp
Database: testhive, table: tb_money
Database: testhive, table: tb_money_mtn
Database: testhive, table: tb_emp
Database: testhive, table: dw_zipper
Database: testhive, table: ods_zipper_update
Database: testhive, table: tmp_zipper
Database: testhive, table: tb_emp01
Database: testhive, table: tb_emp_bucket
Database: testhive, table: tb_dept01
Database: testhive, table: tb_dept_bucket
3. HiveCatalog example: creating a database
This example focuses on how to create a database, specifically how to construct the CatalogDatabase object that createDatabase takes.
pom.xml: same as example 2.

java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.CatalogDatabase;
import org.apache.flink.table.catalog.CatalogDatabaseImpl;
import org.apache.flink.table.catalog.exceptions.CatalogException;
import org.apache.flink.table.catalog.exceptions.DatabaseAlreadyExistException;
import org.apache.flink.table.catalog.exceptions.DatabaseNotExistException;
import org.apache.flink.table.catalog.hive.HiveCatalog;

/**
 * @author alanchan
 */
public class TestHiveCatalogDemo {

    /**
     * @param args
     * @throws DatabaseNotExistException
     * @throws CatalogException
     * @throws DatabaseAlreadyExistException
     */
    public static void main(String[] args) throws CatalogException, DatabaseNotExistException, DatabaseAlreadyExistException {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tenv = StreamTableEnvironment.create(env);

        String name = "alan_hive";
        // database name: testhive
        String defaultDatabase = "testhive";
        String hiveConfDir = "/usr/local/bigdata/apache-hive-3.1.2-bin/conf";

        HiveCatalog hiveCatalog = new HiveCatalog(name, defaultDatabase, hiveConfDir);
        tenv.registerCatalog("alan_hive", hiveCatalog);
        // use the registered catalog
        tenv.useCatalog("alan_hive");

        List<String> tables = hiveCatalog.listTables(defaultDatabase);
        for (String table : tables) {
            System.out.println("Database: testhive, table: " + table);
        }

        // public CatalogDatabaseImpl(Map<String, String> properties, @Nullable String comment) {
        //     this.properties = checkNotNull(properties, "properties cannot be null");
        //     this.comment = comment;
        // }
        Map<String, String> properties = new HashMap<>();
        CatalogDatabase cd = new CatalogDatabaseImpl(properties, "this is new database, the name is alan_hivecatalog_hivedb");
        String newDatabaseName = "alan_hivecatalog_hivedb";
        hiveCatalog.createDatabase(newDatabaseName, cd, true);

        List<String> newTables = hiveCatalog.listTables(newDatabaseName);
        for (String table : newTables) {
            System.out.println("Database: alan_hivecatalog_hivedb, table: " + table);
        }
    }
}

Run results:
################## Hive query results ############################
##### Before the Flink job created the database:
0: jdbc:hive2://server4:10000> show databases;
+----------------+
| database_name  |
+----------------+
| default |
| test |
| testhive |
+----------------+
3 rows selected (0.03 seconds)
##### After the Flink job created the database:
0: jdbc:hive2://server4:10000> show databases;
+--------------------------+
|      database_name       |
+--------------------------+
| alan_hivecatalog_hivedb |
| default |
| test |
| testhive |
+--------------------------+
4 rows selected (0.023 seconds)

################## Flink query results ############################
#### Since only the database was created and it contains no tables yet, there is no output for it. The listing of the tables under testhive is the same as in example 2 and is not repeated here.
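As a follow-up to the example above, the newly created database can be read back or dropped through the same catalog API. A brief sketch, continuing with the hiveCatalog object and database name from the example:

// read back the metadata of the database created above
CatalogDatabase created = hiveCatalog.getDatabase("alan_hivecatalog_hivedb");
System.out.println("comment: " + created.getComment());
// and, if cleanup is wanted: ignoreIfNotExists = true, cascade = false
// hiveCatalog.dropDatabase("alan_hivecatalog_hivedb", true, false);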
2. Table operations

Table operations here mean HiveCatalog operations, because JdbcCatalog cannot create or alter databases and tables (queries are fine); the examples below therefore all assume HiveCatalog. This is essentially the same as part 3 of: 24. Flink Table API & SQL Catalogs: introduction, types, DDL via Java API and SQL, operating catalogs via Java API and SQL (part 1); refer to the examples there, which are not repeated here. The generic operations follow; a short sketch showing how to fill in the CatalogTableImpl placeholder comes after them.
// create table
catalog.createTable(new ObjectPath("mydb", "mytable"), new CatalogTableImpl(...), false);

// drop table
catalog.dropTable(new ObjectPath("mydb", "mytable"), false);

// alter table
catalog.alterTable(new ObjectPath("mydb", "mytable"), new CatalogTableImpl(...), false);

// rename table
catalog.renameTable(new ObjectPath("mydb", "mytable"), "my_new_table");

// get table (the Catalog interface takes an ObjectPath here)
catalog.getTable(new ObjectPath("mydb", "mytable"));

// check if a table exists or not
catalog.tableExists(new ObjectPath("mydb", "mytable"));

// list the tables of a database
catalog.listTables("mydb");
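As with CatalogDatabaseImpl earlier, the CatalogTableImpl(...) placeholders elide the table metadata. A minimal sketch of a complete createTable call; the schema, option map, names, and comment are illustrative assumptions (TableSchema/CatalogTableImpl are the older-style APIs used throughout this series):

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.TableSchema;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.catalog.CatalogTableImpl;
import org.apache.flink.table.catalog.ObjectPath;

// build an illustrative two-column schema
TableSchema schema = TableSchema.builder()
        .field("id", DataTypes.INT())
        .field("name", DataTypes.STRING())
        .build();
Map<String, String> options = new HashMap<>(); // table options, e.g. connector settings; left empty here
CatalogTable table = new CatalogTableImpl(schema, options, "a demo table");

ObjectPath path = new ObjectPath("mydb", "mytable");
if (!catalog.tableExists(path)) {
    catalog.createTable(path, table, false); // ignoreIfExists = false
}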
This article briefly introduced operating on databases and tables through the Java API, and provided concrete runnable examples for each.