Source changes - FishEye

Shows the 20 most recent commits for Traffic Server.

Zoran Regvart <zregvart@apache.org> committed 05ee08956a92b50177c41df3bf2db8b63f0db98c (40 files)
Reviews: none

CAMEL-11561: Cleanup Salesforce integration tests setup

This removes unused profiles that might conflict when running the
Ant Migration Tool and defines the Connected App OAuth Client Id and Client
Secret in `test-salesforce-login.properties`. It is up to the
user of the tests to pick a unique Client Id, e.g. by creating a
Connected App, using its values, and then deleting it.

camel-git feature/maven-wrapper
Anoop Sharma <anoop.sharma@esgyn.com> committed 5a244d532425a6a4aff19a5cd2cd6ab637ea9b0a (20 files)
Reviews: none

A few fixes; details listed below.
-- fix an issue where multiple values inserted from a list would return
   an error even though each value inserted on its own would succeed.
  ex: create table ts (a timestamp);
      insert into ts values ('2017-01-01 10:10:10'), ('2018-01-01 10:10:10');

-- fixed a case where errors returned from the child process during Hive
   inserts were sometimes not propagated.

-- TRAFODION-2683 extension.
   added a 'p' (prune) option that filters out unneeded
   explain output. This helps reduce output, especially
   for larger explain plans.
Ex:
>>explain option 'p' select * from dual;

------------------------------------------------------ PLAN SUMMARY
STATEMENT_NAME ........... NOT NAMED
STATEMENT ................ select * from dual;

------------------------------------------------------- NODE LISTING
ROOT ====================================== SEQ_NO 2 ONLY CHILD 1
DESCRIPTION
  fragment_id ............ 0
  parent_frag ............ (none)
  fragment_type .......... master
  xn_access_mode ......... read_only
  auto_query_retry ....... enabled
  embedded_arkcmp ........ used
  select_list ............ %(0)
  input_variables ........ %(0), %(0), %(0)

VALUES ==================================== SEQ_NO 1 NO CHILDREN
DESCRIPTION
  fragment_id ............ 0
  parent_frag ............ (none)
  fragment_type .......... master
  tuple_expr ............. %(0)

--- SQL operation complete.
>>
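The pruning behavior can be pictured with a small sketch. This is a conceptual illustration only, not the actual Trafodion implementation: the keep-list, the helper name `prune_explain`, and the "key .... value" line format are all hypothetical.

```python
# Conceptual sketch of pruning explain output. The keep-list and the
# line format are hypothetical, not taken from the Trafodion code.

KEEP_KEYS = {"fragment_type", "select_list", "tuple_expr"}  # hypothetical

def prune_explain(lines):
    """Keep operator headers; drop detail lines whose key is not kept."""
    kept = []
    for line in lines:
        stripped = line.strip()
        if "...." in stripped:  # a "key .... value" detail line
            key = stripped.split(" ", 1)[0]
            if key not in KEEP_KEYS:
                continue  # pruned
        kept.append(line)
    return kept

output = [
    "ROOT ====== SEQ_NO 2 ONLY CHILD 1",
    "  fragment_id ............ 0",
    "  fragment_type .......... master",
    "  select_list ............ %(0)",
]
pruned = prune_explain(output)
```

The idea matches the example above: operator headers survive, while per-node detail lines that add little to plan comprehension are dropped.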

Christopher Collins <ccollins@apache.org> committed e1aa9abfe5f6219b5f9268a2599f37432be60ba1 (1 file)
Reviews: none

MYNEWT-568 log - newtmgr logs show; idx before ts.
Now that the index is the "primary key," it comes earlier in the
argument list than the optional timestamp.

Roberta Marton <rmarton@edev07.esgyn.local> committed 913d2337e029a0f904539a1d9d6ea064f90aa6ab (2 files)
Reviews: none

[TRAFODION-2301]: Hadoop crash with logs TMUDF
Today the event_log_reader UDF scans all logs, loads the events into memory,
and only then discards the rows that are not needed. Waiting until the end to
discard rows uses too much memory and causes system issues.

The immediate solution is to use predicate pushdown; that is, specify predicates
on the query using the event_log_reader UDF to limit the scope of the data flow.
These predicates will be pushed into the UDF so the UDF only returns the
required rows instead of all the rows. Initially only comparison predicates are
pushed down to the event_log_reader UDF.
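The memory effect of the pushdown can be sketched as follows. This is a minimal Python illustration of the general technique, not the actual UDF code: the `read_events` generator, the log-line format, and the predicate shape are all hypothetical.

```python
# Minimal sketch of predicate pushdown, not the actual event_log_reader
# code. Without pushdown, every event is materialized before filtering;
# with pushdown, the predicate is applied while scanning, so discarded
# rows are never held in memory.

def read_events(log_lines):
    """Hypothetical reader: parses one event per log line."""
    for line in log_lines:
        severity, _, message = line.partition(" ")
        yield {"severity": severity, "message": message}

def scan_without_pushdown(log_lines, predicate):
    all_rows = list(read_events(log_lines))  # materialize everything first
    return [row for row in all_rows if predicate(row)]

def scan_with_pushdown(log_lines, predicate):
    # Filter inside the scan; non-matching rows are dropped immediately.
    return [row for row in read_events(log_lines) if predicate(row)]

logs = ["INFO started", "ERROR disk full", "INFO done"]
pred = lambda row: row["severity"] == "INFO"
assert scan_without_pushdown(logs, pred) == scan_with_pushdown(logs, pred)
```

Both scans return the same rows; only the peak memory differs, which is exactly the benefit described above.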

In addition to predicate pushdown, a new option has been added to the
event_log_reader UDF - the 's' (statistics) option. This option reports how
many log files were accessed, how many records were read, and how many records
were returned. By specifying timestamp ranges, severity types, sql_codes, and
the like, the number of returned rows can be reduced.

Example output:

Prior to change:

select count(*) from udf(event_log_reader('s'))
  where severity = 'INFO' and
        log_ts between '2016-10-18 00:00:00' and '2016-10-18 22:22:22';

(16497) EVENT_LOG_READER results:
          number log files opened: 113, number log files read: 113,
          number rows read: 2820, number rows returned: 2736

After change:

select count(*) from udf(event_log_reader('s'))
  where severity = 'INFO' and
  log_ts between '2016-10-18 00:00:00' and '2016-10-18 22:22:22';

(17046) EVENT_LOG_READER results:
          number log files opened: 115, number log files read: 115,
          number rows read: 2823, number rows returned: 109

Marshall Schor committed 1759859 (1 file)
Reviews: none

[UIMA-4674] [UIMA-4685] allow for reuse of ts after commit, catch up merge to return proper SerialFormat

Peter Klügl committed 1754092 (1 file)
Andrea Cosentino <ancosen@gmail.com> committed 6f528956ac22c20b3be6dfd3587765624bbcaeb4 (1 file)
Reviews: none

Upgrade cxf-xjc-ts to version 3.0.5

Stefan Fuhrmann committed 1679859 (1 file)
Reviews: none

On the 1.10-cache-improvements branch:
Instead of carefully limiting the key sizes and checking those limits,
make all length fields in entry_t and entry_key_t size_t. This saves
a number of down-/shortening casts as well as key-length limiter code.
The limit for what item size we will actually cache stays in place.

On the downside, each entry bucket (entry_group_t) can now hold only 7
entries, down from 10 in /trunk. This is due to fields added to, and
fields enlarged in, the entry_t struct.

This practically undoes r1679679 and r1679687.

* subversion/libsvn_subr/cache-membuffer.c
  (entry_key_t): Extend KEY_LEN to size_t and reorder members to give them
                 natural alignment.
  (entry_t): Extend item SIZE element to size_t.
  (membuffer_cache_set_internal,
   membuffer_cache_set_partial_internal): Remove obsolete shortening casts.
  (combine_long_key,
   svn_cache__create_membuffer_cache): Use size_t with all lengths. Drop key
                                       length limiter code and conversions.

Tharaknath Capirala <capirala.tharaknath@hp.com> committed a2975c28b7ae72d8013b9a29d8fbbaefa1ef0434 (6 files)
Reviews: none

New and updated repository columns
Repository column changes...

"_REPOS_".METRIC_QUERY_TABLE

Added:
QUERY_STATUS
QUERY_SUB_STATUS --> for future use

----------------------------------------------------

"_REPOS_".METRIC_QUERY_AGGR_TABLE

Added:
SESSION_START_UTC_TS
AGGREGATION_LAST_UPDATE_UTC_TS
AGGREGATION_LAST_ELAPSED_TIME
TOTAL_DDL_STMTS --> Falls under the OTHER category since no
corresponding SQL type exists today
TOTAL_UTIL_STMTS
TOTAL_CATALOG_STMTS
TOTAL_OTHER_STMTS
TOTAL_INSERT_ERRORS
TOTAL_DELETE_ERRORS
TOTAL_UPDATE_ERRORS
TOTAL_SELECT_ERRORS
TOTAL_DDL_ERRORS
TOTAL_UTIL_ERRORS
TOTAL_CATALOG_ERRORS
TOTAL_OTHER_ERRORS
DELTA_DDL_STMTS
DELTA_UTIL_STMTS
DELTA_CATALOG_STMTS
DELTA_OTHER_STMTS
DELTA_INSERT_ERRORS
DELTA_DELETE_ERRORS
DELTA_UPDATE_ERRORS
DELTA_SELECT_ERRORS
DELTA_DDL_ERRORS
DELTA_UTIL_ERRORS
DELTA_CATALOG_ERRORS
DELTA_OTHER_ERRORS

Deleted:
AGGREGATION_START_UTC_TS

Updated:
DELTA_NUM_ROWS_IUD

Note: These columns were already added to the insert/update statements as
part of Anoop's earlier commit.

Packed explain plan to follow soon.

Change-Id: I268d1d24a8886ba1f0dc6181e1f0a65e53143fac

lou degenaro committed 1666559 (1 file)
Reviews: none

UIMA-4069 DUCC Job Driver (JD) system classpath

FlowController and TS moved to user.jar under org.apache.uima.ducc package

Zhijie Shen <zjshen@apache.org> committed 218dc38fdeb92a3a4ade30d17893c1cb06c2e711 (16 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Zhijie Shen <zjshen@apache.org> committed 670ee1a08c3224b399c5d6ae9bad8f53f08dc0a4 (16 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Zhijie Shen <zjshen@apache.org> committed 69749e5e865d44e03190151773e0fe3a590b6748 (16 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop feature-YARN-2928
Zhijie Shen <zjshen@apache.org> committed 7c8abec0a8fc8b10f57438c60b77f48dac679b68 (15 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop HADOOP-13128
Zhijie Shen <zjshen@apache.org> committed bcb211de8d200e44bce63f4edbcd2d2ebcd43541 (15 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop
Zhijie Shen <zjshen@apache.org> committed d1912606081f5c7942e95e587dd71aa255e3ecee (15 files)
Reviews: none

YARN-3030. Set up TS aggregator with basic request serving structure and lifecycle. Contributed by Sangjin Lee.
(cherry picked from commit f26941b39028ac30c77547e2be2d657bb5bf044a)

hadoop