With over a billion facts, performance is a challenge in HERON.

See also GroupOnly/OracleMeetingNov, attachment:HERON_Performance.pptx

See also HeronSizingGrowth#ColumnStoreOptions.

Performance tickets

Ticket | Type         | Summary                                                                  | Resolution | Priority | Status | Owner
#554   | design-issue | Performance issue in the IDX load                                        | fixed      | major    | closed | mhoag
#1613  | defect       | Explosion of diagnosis codes leads to poor performance                   | fixed      | major    | closed | ngraham
#2132  | design-issue | Improve efficiency of term_tree creation for enhance_concepts            | fixed      | major    | closed | mhoag
#2285  | defect       | HERON performance page doesn't include timeline queries                  | fixed      | major    | closed | dconnolly
#2313  | problem      | Navigating HERON term hierarchy is slow for users with REDCap access     | fixed      | major    | closed | badagarla
#2394  | design-issue | Creating tooltips takes from seconds to hours depending on table statistics | fixed   | major    | closed | ngraham

Performance testing

From our 10 Feb meeting; Nathan, is this obsolete?

select compare.queryname, compare.querydesc,
       compare.agg_t, compare.eval_t, compare.mx, compare.factor,
       case when compare.factor < 0.75 then 'FASTER'
            when compare.factor > 1.25 then 'SLOWER'
       end summary
from (
  select agg.querydesc, mx, agg.t agg_t, bit.t eval_t,
         case when agg.t != 0 then round(bit.t / agg.t, 2)
              else null end factor,
         agg.queryname
  from (
    select round(avg(testduration), 2) t,
           round(max(testduration), 2) mx,
           queryname, querydesc
    from ngraham.timingtests
    group by queryname, querydesc
  ) agg
  join (
    select round(testduration, 2) t, queryname
    from ngraham.timingtests
    where testrundesc = 'With bitmap indexes.'
  ) bit on bit.queryname = agg.queryname
) compare
order by factor;
  • develop SQL scripts to characterize performance that our users are seeing (#1069)

Adding instance_num to observation_fact pk in 1.6

  • instance num or not in indexes on blueherondata? (#858). Agreed (10 Feb 2012): stick with vanilla 1.6 for clinton release, to get a baseline
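
For reference, a sketch of what the 1.6 change amounts to. This is hedged: the constraint name and column order follow my reading of the stock i2b2 1.6 CRC DDL, not our exact blueherondata schema, so verify against the actual DDL before relying on it:

```sql
-- Sketch, not our DDL: i2b2 1.6 adds instance_num to the
-- observation_fact primary key (column order per the stock CRC DDL).
alter table blueherondata.observation_fact
  add constraint observation_fact_pk
  primary key (patient_num, concept_cd, modifier_cd, start_date,
               encounter_num, instance_num, provider_id);
```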

Hardware contention, tuning

  • note: test app server and prod app server are connected to different Oracle instances, but the instances are on the same host, so there are possible interactions.
  • Oracle CPU tuning?
    • Do we have Oracle support?
      • yes. We could ask them to look at our tuning
      • mnair to look into memory/cpu tuning (#865)
      • mnair to open the communication channel with Oracle support

Techniques to try, be aware of

Star Transformations

see OracleTips#star-tx
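
A minimal sketch of trying it. The session parameter and plan display are standard Oracle; the example query and index assumptions (bitmap indexes on the fact table's foreign-key columns) are illustrative, not a description of our current setup:

```sql
-- Star transformation rewrites a fact/dimension join into bitmap-index
-- AND/OR operations; it requires bitmap indexes on the fact table FKs.
alter session set star_transformation_enabled = 'TRUE';

explain plan for
select count(distinct f.patient_num)
from blueherondata.observation_fact f
join blueherondata.concept_dimension c
  on c.concept_cd = f.concept_cd          -- hypothetical example query
where c.concept_path like '\i2b2\Diagnoses\%';

-- look for STAR TRANSFORMATION / BITMAP AND steps in the plan
select * from table(dbms_xplan.display);
```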

Range Queries/Bind Variables

advice from Brandon:

Bind variables are usually a good thing, especially when you've got lots of unique queries and the overhead of parsing adds up, putting a burden on the CPU. When you've got fewer, longer-running queries, as in a data warehouse, they may not be a good thing. The reason is that the optimizer doesn't have access to the values from the query to base its decisions on. In a range scan, the statistics should tell it the max and min values for a given column in a table. If it has the actual values (i.e., not using bind variables), it should be able to tell whether the query is searching in the middle of the possible values (a full table scan is probably best) or toward the beginning or end of the possible range (an index scan is best). There are also histograms, which can give the optimizer better information on the distribution of values in a column, but they likewise require you not to use bind variables. You can also use fast full index scans for range values, which basically pull the index in larger chunks from disk.
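
To illustrate Brandon's point, a hypothetical pair of queries (the table, column, and cutoff value are examples, not taken from a real workload):

```sql
-- With a literal, at parse time the optimizer can compare the value
-- against the column's min/max statistics and any histogram, and choose
-- index range scan vs. full table scan accordingly:
select count(*)
from blueherondata.observation_fact
where start_date >= date '2012-01-01';

-- With a bind variable, the value is unknown when the plan is built
-- (aside from bind peeking on the first hard parse), so the range
-- selectivity estimate is largely a guess:
select count(*)
from blueherondata.observation_fact
where start_date >= :cutoff_date;
```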

"Supercharging i2b2"

Bhargav went to this presentation about skipping the hive middle tier and just using (T-SQL) stored procedures (among other optimizations), resulting in up to a 10x performance boost:

in proceedings of:


With 5k patients, it was fine.

After loading from Epic, life got hard; especially lab value lt/gt X queries (#418).

Then we got solid-state storage.

see also:

  • solid state storage notes (blog item, #474)

Sad performance April 2012

  • Here's a query to look at our poor system performance of late.
  • We also noticed several runaway processes hitting Oracle, which we killed this afternoon.
select avg((qqri.end_date - qqri.start_date) * 24 * 60 * 60) as elapsed_seconds_avg,
       to_char(qqm.create_date, 'YYYY-MM') as query_month,
       count(qqm.query_master_id) as num_queries,
       avg(qqri.real_set_size) as avg_set_size
from BLUEHERONDATA.qt_query_master qqm
join BLUEHERONDATA.qt_query_instance qqi
  on qqm.query_master_id = qqi.query_master_id
join BLUEHERONDATA.qt_query_result_instance qqri
  on qqi.query_instance_id = qqri.query_instance_id
group by to_char(qqm.create_date, 'YYYY-MM')
order by to_char(qqm.create_date, 'YYYY-MM') desc;

Our upgrade to 1.6 was in late December. Since then, February, March, and April of 2012 have been significantly worse, with some horrible performance this past month.

We would want to add this kind of monitoring, and possibly stratify by depth of the query and by whether it included conjunctions with numeric "greater than"/"less than" criteria, as well as "occurs" and "date threshold" criteria.
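
One hedged sketch of that stratification, assuming the query definition XML lives in qt_query_master.request_xml as in stock i2b2; the tag name searched for ('constrain_by_value') is a guess to be checked against real request XML:

```sql
-- Hypothetical: bucket monthly timings by whether the stored query
-- definition mentions a value constraint (e.g. lab value > X).
select to_char(qqm.create_date, 'YYYY-MM') as query_month,
       case when dbms_lob.instr(qqm.request_xml, 'constrain_by_value') > 0
            then 'value-constrained' else 'plain' end as query_kind,
       count(*) as num_queries,
       avg((qqri.end_date - qqri.start_date) * 24 * 60 * 60) as elapsed_seconds_avg
from BLUEHERONDATA.qt_query_master qqm
join BLUEHERONDATA.qt_query_instance qqi
  on qqi.query_master_id = qqm.query_master_id
join BLUEHERONDATA.qt_query_result_instance qqri
  on qqri.query_instance_id = qqi.query_instance_id
group by to_char(qqm.create_date, 'YYYY-MM'),
         case when dbms_lob.instr(qqm.request_xml, 'constrain_by_value') > 0
              then 'value-constrained' else 'plain' end
order by query_month desc;
```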

Last modified on Jan 28, 2014 1:38:17 PM