Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Duplicate
- Affects Version: 1.10.0
- Fix Version: None
Description
When running a single job on YARN, Flink launches many containers even though parallelism is set to 4.
scripts:
./bin/flink run -m yarn-cluster -ys 3 -p 4 -yjm 1024m -ytm 4096m -yqu bi -c com.cc.test.HiveTest2 ./cc_jars/hive-1.0-SNAPSHOT.jar 11.txt test61 6
Note: 1.9.1 had the CLI parameter -yn to control the number of containers; it was removed in 1.10.
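Since -yn is gone in 1.10, the only way to bound resource usage is through configuration. The linked ticket FLINK-16605 added a cap on the total number of slots; a hedged sketch of using it in flink-conf.yaml (the option name and the Flink version it lands in depend on that ticket's fix, so verify against your release):

```yaml
# flink-conf.yaml — assumption: this option comes from the FLINK-16605 fix
# and is only available in releases that include it.
# Caps the total slots the ResourceManager will allocate, which indirectly
# bounds the number of TaskManager containers requested on YARN.
slotmanager.number-of-slots.max: 4
```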
Result:
The number of containers is 500+.
Code used:
The job queries a Hive table and saves the result to HDFS as text.
The table's storage size is 200 GB+.
code:
com.cc.test.HiveTest2
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;
import org.apache.flink.types.Row;

public static void main(String[] args) throws Exception {
    EnvironmentSettings settings = EnvironmentSettings.newInstance()
            .useBlinkPlanner()
            .inStreamingMode()
            .build();
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(Integer.valueOf(args[2]));
    StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env, settings);

    // Register and use the Hive catalog
    String name = "myhive";
    String defaultDatabase = "test";
    String hiveConfDir = "/etc/hive/conf";
    String version = "1.2.1"; // or 2.3.4
    HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
    tableEnv.registerCatalog("myhive", hive);
    tableEnv.useCatalog("myhive");
    tableEnv.listTables();

    // Query the table and write the result to HDFS as text
    Table table = tableEnv.sqlQuery(
            "select id from orderparent_test2 where id = 'A000021204170176'");
    tableEnv.toAppendStream(table, Row.class).print();
    tableEnv.toAppendStream(table, Row.class)
            .writeAsText("hdfs:///user/chenchao1/" + args[0], FileSystem.WriteMode.OVERWRITE);
    tableEnv.execute(args[1]);
}
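For context on why 500+ containers is surprising: with slot-based scheduling, the number of TaskManager containers needed should be the parallelism divided by the slots per TaskManager, rounded up. A minimal sketch of that arithmetic (the class and method names here are hypothetical, purely for illustration; this is not code from the report):

```java
// Illustrates the expected container count for the submit command above.
public class ContainerEstimate {
    // With -ys slots per TaskManager and job parallelism -p, the scheduler
    // should need ceil(p / slots) TaskManager containers.
    public static int expectedTaskManagers(int parallelism, int slotsPerTaskManager) {
        return (parallelism + slotsPerTaskManager - 1) / slotsPerTaskManager;
    }

    public static void main(String[] args) {
        // -p 4, -ys 3 as in the ./bin/flink run command above
        System.out.println(expectedTaskManagers(4, 3)); // prints 2
    }
}
```

So the submit command should need roughly 2 TaskManager containers, not 500+, which is what makes this report point at over-allocation rather than normal scaling.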
Attachments
Issue Links
- duplicates
  - FLINK-16605 Add max limitation to the total number of slots (Closed)
- relates to
  - FLINK-16605 Add max limitation to the total number of slots (Closed)