
Traffic Server

Source changes - FishEye

Shows the 20 most recent commits for Traffic Server.

Christopher Collins <ccollins@apache.org> committed e1aa9abfe5f6219b5f9268a2599f37432be60ba1 (1 file)
Reviews: none

MYNEWT-568 log - newtmgr logs show; idx before ts.
Now that the index is the "primary key," it comes earlier in the
argument list than the optional timestamp.

Roberta Marton <rmarton@edev07.esgyn.local> committed 913d2337e029a0f904539a1d9d6ea064f90aa6ab (2 files)
Reviews: none

[TRAFODION-2301]: Hadoop crash with logs TMUDF
Today the UDF event_log_reader scans all logs, loads the events into
memory, and only then discards the rows that are not needed. Waiting
until the end to discard rows takes too much memory and causes system
issues.

The immediate solution is to use predicate pushdown; that is, specify predicates
on the query using the event_log_reader UDF to limit the scope of the data flow.
These predicates will be pushed into the UDF so the UDF only returns the
required rows instead of all the rows. Initially only comparison predicates are
pushed down to the event_log_reader UDF.

In addition to predicate pushdown, a new option has been added to the
event_log_reader UDF - the 's' (statistics) option. This option reports how
many log files were accessed, how many records were read, and how many records
were returned. By specifying timestamp ranges, severity types, sql_codes, and
the like, the number of returned rows can be reduced.

Example output:

Prior to change:

select count(*) from udf(event_log_reader('s'))
  where severity = 'INFO' and
        log_ts between '2016-10-18 00:00:00' and '2016-10-18 22:22:22';

(16497) EVENT_LOG_READER results:
          number log files opened: 113, number log files read: 113,
          number rows read: 2820, number rows returned: 2736

After change:

select count(*) from udf(event_log_reader('s'))
  where severity = 'INFO' and
        log_ts between '2016-10-18 00:00:00' and '2016-10-18 22:22:22';

(17046) EVENT_LOG_READER results:
          number log files opened: 115, number log files read: 115,
          number rows read: 2823, number rows returned: 109
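The early-discard idea behind the pushdown can be sketched as follows. This is a hypothetical C illustration (severity_is_info and scan_with_pushdown are invented names for the sketch), not the actual Trafodion code: the predicate is applied while scanning each log line, so non-matching rows are dropped immediately instead of being buffered first.

```c
#include <string.h>

/* A pushed-down predicate: a function applied per log line. */
typedef int (*row_pred)(const char *line);

/* Example predicate, mirroring "where severity = 'INFO'". */
static int severity_is_info(const char *line)
{
  return strstr(line, "INFO") != NULL;
}

/* Scans lines and counts how many were read and how many pass the
 * pushed-down predicate -- mirroring the 's' statistics option. */
static void scan_with_pushdown(const char **lines, int n,
                               row_pred pred,
                               int *rows_read, int *rows_returned)
{
  *rows_read = 0;
  *rows_returned = 0;
  for (int i = 0; i < n; i++) {
    (*rows_read)++;
    if (pred(lines[i]))
      (*rows_returned)++;   /* only matching rows flow downstream */
  }
}
```

With this shape, memory use is bounded by one line at a time rather than by the full log volume, which is the effect the commit describes.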

Marshall Schor committed 1759859 (1 file)
Reviews: none

[UIMA-4674] [UIMA-4685] allow for reuse of ts after commit, catch up merge to return proper SerialFormat

Andrea Cosentino <ancosen@gmail.com> committed 6f528956ac22c20b3be6dfd3587765624bbcaeb4 (1 file)
Reviews: none

Upgrade cxf-xjc-ts to version 3.0.5

Stefan Fuhrmann committed 1679859 (1 file)
Reviews: none

On the 1.10-cache-improvements branch:
Instead of carefully limiting the key sizes and checking those limits,
make all length fields in entry_t and entry_key_t size_t. This saves
a number of down-/shortening casts as well as key-length limiter code.
The limit on what item size we will actually cache stays in place.

On the downside, each entry bucket (entry_group_t) can now hold only 7
entries, down from 10 in /trunk. This is due to fields added to, and
enlarged in, the entry_t struct.

This practically undoes r1679679 and r1679687.

* subversion/libsvn_subr/cache-membuffer.c
  (entry_key_t): Extend KEY_LEN to size_t and reorder members to give them
                 natural alignment.
  (entry_t): Extend item SIZE element to size_t.
  (membuffer_cache_set_internal,
   membuffer_cache_set_partial_internal): Remove obsolete shortening casts.
  (combine_long_key,
   svn_cache__create_membuffer_cache): Use size_t with all lengths. Drop key
                                       length limiter code and conversions.
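As a rough illustration of why the wider fields remove the casts and limiter code, consider this simplified, hypothetical sketch (old_entry_key_t, new_entry_key_t, and the limit value are invented for the example; this is not the actual Subversion code):

```c
#include <stddef.h>
#include <string.h>

/* Before: a narrow length field forces a limit check and a
 * shortening cast at every assignment. */
typedef struct old_entry_key_t {
  unsigned short key_len;           /* narrow field */
} old_entry_key_t;

/* After: size_t is the natural width for object sizes, so a
 * strlen() result can be stored directly. */
typedef struct new_entry_key_t {
  size_t key_len;
} new_entry_key_t;

static int set_key_old(old_entry_key_t *k, const char *key)
{
  size_t len = strlen(key);
  if (len > 0xffffu)                /* limiter code the commit removes */
    return -1;
  k->key_len = (unsigned short)len; /* shortening cast */
  return 0;
}

static void set_key_new(new_entry_key_t *k, const char *key)
{
  k->key_len = strlen(key);         /* no cast, no limit check */
}
```

The trade-off, as the commit notes, is that wider fields enlarge entry_t, so fewer entries fit in each bucket.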

Tharaknath Capirala <capirala.tharaknath@hp.com> committed a2975c28b7ae72d8013b9a29d8fbbaefa1ef0434 (6 files)
Reviews: none

New and updated repository columns
Repository column changes...

"_REPOS_".METRIC_QUERY_TABLE

Added:
QUERY_STATUS
QUERY_SUB_STATUS --> for future use

----------------------------------------------------

"_REPOS_".METRIC_QUERY_AGGR_TABLE

Added:
SESSION_START_UTC_TS
AGGREGATION_LAST_UPDATE_UTC_TS
AGGREGATION_LAST_ELAPSED_TIME
TOTAL_DDL_STMTS --> falls under the OTHER category since no
corresponding SQL type exists today
TOTAL_UTIL_STMTS
TOTAL_CATALOG_STMTS
TOTAL_OTHER_STMTS
TOTAL_INSERT_ERRORS
TOTAL_DELETE_ERRORS
TOTAL_UPDATE_ERRORS
TOTAL_SELECT_ERRORS
TOTAL_DDL_ERRORS
TOTAL_UTIL_ERRORS
TOTAL_CATALOG_ERRORS
TOTAL_OTHER_ERRORS
DELTA_DDL_STMTS
DELTA_UTIL_STMTS
DELTA_CATALOG_STMTS
DELTA_OTHER_STMTS
DELTA_INSERT_ERRORS
DELTA_DELETE_ERRORS
DELTA_UPDATE_ERRORS
DELTA_SELECT_ERRORS
DELTA_DDL_ERRORS
DELTA_UTIL_ERRORS
DELTA_CATALOG_ERRORS
DELTA_OTHER_ERRORS

Deleted:
AGGREGATION_START_UTC_TS

Updated:
DELTA_NUM_ROWS_IUD

Note: These columns were already added to the insert/update statements as
part of Anoop's earlier commit.

Packed explain plan to follow soon.

Change-Id: I268d1d24a8886ba1f0dc6181e1f0a65e53143fac

lou degenaro committed 1666559 (1 file)
Reviews: none

UIMA-4069 DUCC Job Driver (JD) system classpath

FlowController and TS moved to user.jar under org.apache.uima.ducc package

Zhijie Shen <zjshen@apache.org> committed 218dc38fdeb92a3a4ade30d17893c1cb06c2e711 (16 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Zhijie Shen <zjshen@apache.org> committed 670ee1a08c3224b399c5d6ae9bad8f53f08dc0a4 (16 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Zhijie Shen <zjshen@apache.org> committed 69749e5e865d44e03190151773e0fe3a590b6748 (16 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop feature-YARN-2928
Zhijie Shen <zjshen@apache.org> committed 7c8abec0a8fc8b10f57438c60b77f48dac679b68 (15 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop HADOOP-13128
Zhijie Shen <zjshen@apache.org> committed bcb211de8d200e44bce63f4edbcd2d2ebcd43541 (15 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Zhijie Shen <zjshen@apache.org> committed d1912606081f5c7942e95e587dd71aa255e3ecee (15 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Judy Zhao <hongxia.zhao@hp.com> committed 24a583149b22b7172a950f6608e22f3b545e8927 (2 files)
Reviews: none

Fix for bug #1409225
The changes fix bugs 1409225 (same as 1409228) and 1409227.
Bug 1409225: there are errors about unmatched quotes.
Bug 1409228: EXEC_END_UTC_TS is null for 'alter table' and some drop statements.
Both have the same root cause: the string ERROR_TEXT was not correctly terminated.
To fix bug 1409227, simply set METRIC_QUERY_TABLE's SUBMIT_UTS_TS = EXEC_START_UTC_TS for now.
Also includes a fix for another unmatched-quotes case.

Change-Id: I41bded0f7a4911e4590e94991bfafbb3145e51d0

Hans Zeller <hans.zeller@hp.com> committed 67d5c06581bc6a61bc5bb6ea981ccae70c3463d5 (11 files)
Reviews: none

Log reading TMUDF, phase 3
blueprint cmp-tmudf-compile-time-interface

- Addressed review comments from phase 2. See
  https://review.trafodion.org/#/c/824
- Added a "parse_status" column to the TMUDF, see
  updated syntax below
- Added versioning info to new DLL libudr_predef.so
- EVENT_LOG_READER TMUDF now should choose the correct
  degree of parallelism without the need for CQDs
- Brought back the REPLICATE PARTITION keyword, which
  is used in the TMUDF syntax. This should fix the failure
  in regression test udf/TEST108.
- Some remaining issues:
   - Newlines in the error message are not handled well;
     at best the additional lines are lost, at worst
     they will cause parse errors
   - log_file_node output column is always 0
   - Code is not yet integrated with changes to event
     logging
   - Not yet tested on clusters

Updated syntax for the log reader TMUDF:

 SQL Syntax to invoke this function:

  select * from udf(event_log_reader( [options] ));

 The optional [options] argument is a character constant. The
 following options are supported:
  f: add file name output columns (see below)
  t: turn on tracing
  d: loop in the runtime code, to be able to attach a debugger
     (debug build only)
  p: force parallel execution on workstation environment with
     virtual nodes (debug build only)

 Returned columns:

 log_ts timestamp(6),
 severity char(10 bytes) character set utf8,
 component char(24 bytes) character set utf8,
 node_number integer,
 cpu integer,
 pin integer,
 process_name char(12 bytes) character set utf8,
 sql_code integer,
 query_id varchar(200 bytes) character set utf8,
 message varchar(4000 bytes) character set utf8

 If option "f" was specified, there are four more columns:

 log_file_node integer not null,
 log_file_name varchar(200 bytes) character set utf8 not null,
 log_file_line integer not null,
 parse_status char(2 bytes) character set utf8 not null

 (log_file_node, log_file_name, log_file_line) form a unique key
 in the result table. parse_status indicates whether there were
 any errors reading the information:
 ' ' (two blanks): no errors
 'E' (as first or second character): parse error
 'T' (as first or second character): truncation or over/underflow
                                      occurred
 'C' (as first or second character): character conversion error
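For example, a client consuming these rows might decode the flags like this. decode_parse_status is a hypothetical helper based on the rules above, not part of the UDF:

```c
/* Hypothetical client-side helper: interprets the two-character
 * parse_status column per the rules described above. */
typedef struct {
  int parse_error;       /* 'E' in either character position */
  int truncation;        /* 'T': truncation or over/underflow */
  int conversion_error;  /* 'C': character conversion error   */
} parse_status_flags;

static parse_status_flags decode_parse_status(const char status[2])
{
  parse_status_flags f = {0, 0, 0};
  for (int i = 0; i < 2; i++) {
    switch (status[i]) {
      case 'E': f.parse_error = 1;      break;
      case 'T': f.truncation = 1;       break;
      case 'C': f.conversion_error = 1; break;
      default:  break;  /* ' ' means no error in that position */
    }
  }
  return f;
}
```

Two blanks thus decode to all-clear, and the two character positions allow two distinct error conditions to be reported for one row.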

Change-Id: Iee3fc8383d4125f0f9b6c6035aa90bb82ceee92e