One way to test this is to compare samples from one region with a reference from another region that is distant enough to rule out any correct matches. Another way is to use data from different periods, e.g. compare samples from the last 2000 years with a reference that is older.
CDendro contains tools for running such tests rather easily.
For this example we have used blocks from 1338 samples taken from 773 Scots pine trees in southeast Finland. (PISY = Pinus sylvestris. The samples are available at the ITRDB in
several files by Meriläinen, Lindholm, Timonen and others.)
As the reference (the chronology) we have used a mean value ring width curve created from the ITRDB:mt112 collection (PIFL = Pinus flexilis, AD 470-1998, i.e. 1529 years, Montana, USA, by King, Waggoner & Graumlich).
To avoid blocks containing zero-rings and overlaps shorter than stated, the minimum overlap should be set to the same value as the block length.
In CDendro you also have to uncheck all collection members that are shorter than the specified block length.
To get many blocks for testing you may prefer quite a short distance between the blocks.
However, this creates a risk that the tested blocks look rather similar to each other, so that the same false hit may be reported several times.
As a middle course, we prefer to set the block distance to half of the block length.
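The block settings above (a fixed block length, minimum overlap equal to the block length, and a block distance of half the block length) can be sketched as a simple splitting routine. This is an illustrative helper, not CDendro's actual implementation; the function name and defaults are made up for the example:

```python
def make_blocks(widths, block_length=80, block_distance=None):
    """Split a ring-width series into fixed-length test blocks.

    block_distance defaults to half the block length, the middle
    course suggested above. Hypothetical helper, not CDendro code.
    """
    if block_distance is None:
        block_distance = block_length // 2
    blocks = []
    # Only full-length blocks are kept, so every block can meet a
    # minimum overlap equal to the block length.
    for start in range(0, len(widths) - block_length + 1, block_distance):
        blocks.append(widths[start:start + block_length])
    return blocks
```

With a 200-ring sample and the defaults, this yields blocks starting at rings 0, 40, 80 and 120, each 80 rings long.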
Helping context. In practical dendrochronological work you often have several samples from the same context,
which can support each other for a certain crossdating towards a reference, even if, taken one by one, they give slightly too low crossdating values.
When zero rings are needed. When a zero ring is needed to make a sample crossdate properly, you have to be very suspicious if all rings look normal.
With e.g. pine samples it sometimes happens that rings are missing in parts of the ring circuit although the surrounding rings look quite normal.
If a crosscut is available, such incomplete rings can often be identified. When working with cores only, it is often impossible to see where a ring is missing.
So when inserting a ring to achieve good correlation values, remember that such correlation values are to some extent invented and have to be treated with great suspicion.
To be confident that inserting a ring is correct, the blocks on one or both sides of the inserted ring (or rings) should themselves have values good enough to justify their dating.
The following test is therefore relevant only when compared with "clean" samples without any inserted rings (or excluded false rings) and without "helping context".
Blocks 80 years long, starting every 40 years, give some 3800 blocks to be tested, each with an overlap of 80 years.
The statistics here are given for seven normalization methods available within CDendro.
The analysis shows that the "safety level" has to be set a little higher for Baillie/Pilcher, and possibly also for the Hollstein normalization, than for
the other normalization methods. The "Mean of sliding frames algorithm" stands out as possibly allowing a lower "safety level" than the others.
Note: Be aware that all these methods, except the Besancon Index E and P2YrsLimited methods, tend to get fooled by very narrow rings. So keep
an eye on the Besancon and P2YrsLimited methods and also on the "Skeleton Chi2" value for any match. If the Chi2 value is low for a match, there is a big risk of a false match. See the
section How to get fooled by your normalization method and some too narrow ring widths.
Note: The lowest required TTest-value grows with shorter block lengths!
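The TTest values tabulated below are, following standard dendrochronological practice (after Baillie & Pilcher), t-values derived from the correlation coefficient r over an overlap of n rings. A minimal sketch, assuming that convention (the source does not spell out CDendro's exact formula):

```python
import math

def t_value(r, n):
    """Student's t for a correlation coefficient r over an overlap
    of n rings, t = r * sqrt(n - 2) / sqrt(1 - r^2), the t-test
    commonly used for crossdating (assumed, not CDendro source)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
```

For the same correlation, e.g. r = 0.6, a 50-year overlap gives a lower t than an 80-year overlap; yet short blocks reach a high r (and thus a high t) by chance more often, which is why the required threshold grows with shorter block lengths.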
Number of best false matches grouped by TTEST value and normalization method
Block length: 80 years
Normalization method: | Mean of sliding frames algorithm | Besancon (no logarithm) | Cross84 | P2YrsLimited | P2yrs | Hollstein | Baillie/Pilcher |
False TTest >= 7.0: | 0/3749 0% | 0/3749 0% | 0/3829 0% | 0/3810 0% | 0/3810 0% | 0/3810 0% | 0/3775 0% |
False TTest >= 6.5: | 0/3749 0% | 0/3749 0% | 1/3829 0.03% | 2/3810 0.05% | 1/3810 0.03% | 4/3810 0.1% | 3/3775 0.1% |
False TTest >= 6.0: | 0/3749 0% | 3/3749 0.08% | 3/3829 0.08% | 3/3810 0.08% | 4/3810 0.1% | 6/3810 0.16% | 12/3775 0.3% |
False TTest >= 5.5: | 7/3749 0.2% | 15/3749 0.4% | 16/3829 0.4% | 8/3810 0.2% | 16/3810 0.4% | 23/3810 0.6% | 33/3775 0.9% |
False TTest >= 5.0: | 29/3749 0.8% | 57/3749 1.5% | 56/3829 1.5% | 70/3810 2% | 90/3810 2.3% | 98/3810 2.6% | 143/3775 4% |
False TTest >= 4.5: | 164/3749 4.4% | 229/3749 6% | 258/3829 7% | 290/3810 8% | 333/3810 9% | 361/3810 10% | 521/3775 14% |
Block length: 120 years
Normalization method: | Mean of sliding frames algorithm | Besancon (no logarithm) | Cross84 | P2YrsLimited | P2yrs | Hollstein | Baillie/Pilcher |
False TTest >= 7.0: | 0/1798 0% | 0/1798 0% | 0/1853 0% | 0/1834 0% | 0/1834 0% | 0/1834 0% | 0/1815 0% |
False TTest >= 6.5: | 0/1798 0% | 0/1798 0% | 1/1853 0.05% | 0/1834 0% | 0/1834 0% | 1/1834 0.05% | 0/1815 0% |
False TTest >= 6.0: | 0/1798 0% | 0/1798 0% | 2/1853 0.1% | 0/1834 0% | 1/1834 0.05% | 4/1834 0.2% | 5/1815 0.3% |
False TTest >= 5.5: | 1/1798 0.06% | 1/1798 0.06% | 5/1853 0.3% | 4/1834 0.2% | 4/1834 0.2% | 6/1834 0.3% | 10/1815 0.6% |
False TTest >= 5.0: | 8/1798 0.4% | 11/1798 0.6% | 17/1853 0.9% | 19/1834 1% | 21/1834 1% | 27/1834 1.5% | 50/1815 3% |
False TTest >= 4.5: | 44/1798 2.4% | 88/1798 5% | 86/1853 5% | 93/1834 5% | 117/1834 6% | 123/1834 7% | 193/1815 11% |
How to read the tables: See the first table above: when testing some 3800 blocks of 80 years length against an uncorrelated reference of 1529 years, no block had a best match above TTest = 7.0.
The rightmost cell of the "TTest >= 5.0" line tells that, when using Baillie/Pilcher normalization, as many as 4% of the blocks had a best TTest >= 5.0, and all of these were erroneous matches, i.e. "False TTest".
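The tallying behind these tables can be sketched as follows, assuming we have recorded the best match t-value for each tested block against the uncorrelated reference (an illustrative helper, not CDendro code):

```python
def false_match_table(best_t_values,
                      thresholds=(7.0, 6.5, 6.0, 5.5, 5.0, 4.5)):
    """Count how many blocks have a best (and thus false) t-value
    at or above each threshold, as in the tables above.
    Illustrative only; not CDendro's implementation."""
    return {th: sum(1 for t in best_t_values if t >= th)
            for th in thresholds}
```

For example, with best t-values [4.6, 5.1, 3.0, 7.2], two of the four blocks land at or above the 5.0 threshold and three at or above 4.5.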
The block length has a strong impact on the risk level of false high TTest-values:
Number of best false matches grouped by TTEST value and block length
Normalization method: Proportion of last 2 years growth (Prop 2 yrs)
Block length: | 50 years | 60 years | 80 years | 120 years |
False TTest >=7.0: | 0/6989 0% | 0/5564 0% | 0/3810 0% | 0/1834 0% |
False TTest >=6.5: | 3/6989 0.04% | 4/5564 0.07% | 1/3810 0.03% | 0/1834 0% |
False TTest >=6.0: | 19/6989 0.3% | 13/5564 0.2% | 4/3810 0.1% | 1/1834 0.05% |
False TTest >=5.5: | 76/6989 1.1% | 37/5564 0.7% | 16/3810 0.4% | 4/1834 0.2% |
False TTest >=5.0: | 266/6989 4% | 151/5564 2.7% | 90/3810 2.4% | 21/1834 1.1% |
False TTest >=4.5: | 961/6989 14% | 647/5564 12% | 333/3810 9% | 117/1834 6% |
Normalization method: Besancon index E - No logarithm
Block length: | 50 years | 60 years | 80 years | 120 years |
False TTest >=7.0: | 0/6842 0% | 1/5477 0.02% | 0/3749 0% | 0/1798 0% |
False TTest >=6.5: | 1/6842 0.01% | 1/5477 0.02% | 0/3749 0% | 0/1798 0% |
False TTest >=6.0: | 10/6842 0.15% | 6/5477 0.1% | 3/3749 0.08% | 0/1798 0% |
False TTest >=5.5: | 43/6842 0.6% | 22/5477 0.4% | 15/3749 0.4% | 1/1798 0.06% |
False TTest >=5.0: | 177/6842 2.6% | 100/5477 1.8% | 57/3749 1.5% | 11/1798 0.6% |
False TTest >=4.5: | 704/6842 10% | 450/5477 8% | 229/3749 6% | 88/1798 5% |
Note: The Besancon algorithm has two optional logarithmic modes, standard eLog and the "Besancon logarithm" defined as
normV = if (normV>0) then log(normV) else -log(-normV)
The numbers of erroneous matches found in the "Besancon (no logarithm)" case with block length 80 were 0/0/3/15/57/229 (see the first table above).
For the eLog variant the numbers are: 0/0/2/15/61/257
For the BesLog variant the numbers are: 0/1/1/10/57/278
That is, all three variants give about the same result.
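The sign-preserving "Besancon logarithm" defined above can be written out directly (the function name is ours; the formula is as given in the note, which leaves the value 0 undefined):

```python
import math

def besancon_log(norm_v):
    """Sign-preserving 'Besancon logarithm':
    log(v) for positive values, -log(-v) for negative ones.
    norm_v == 0 is undefined in the formula and not handled here."""
    if norm_v > 0:
        return math.log(norm_v)
    return -math.log(-norm_v)
```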
With CDendro, the normalized mean value reference created from a collection can be made in two different ways:
either as a mean value curve from all the normalized ring width curves of the collection,
or as a normalized curve made from the mean value ring width curve (a .wid-file) created from all the detrended ring width curves of the collection.
Tests have been run with both types of reference curves. No significant differences were found between the two methods for this case,
where we are only looking for false matches.
The tables above show the numbers for the case with a mean value curve used as a reference.
(Created from the ITRDB:mt112 collection by CDendro using a "heavy detrend" mechanism, i.e. dividing each ring width by the mean of the 14 surrounding rings.)
The final results vary somewhat depending on how the reference is created or which reference is used.
Nevertheless, the overall results stay about the same for all the ways we have tested.
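The "heavy detrend" step mentioned above can be sketched as follows. We assume the 14 surrounding rings means a centered window of 7 rings on each side, truncated at the series ends; CDendro's exact window and edge handling may differ:

```python
def heavy_detrend(widths, surround=14):
    """Divide each ring width by the mean of its surrounding rings
    (a sketch of the 'heavy detrend' idea; surround/2 rings on each
    side, excluding the ring itself, truncated at the ends).
    Illustrative only; CDendro's exact handling may differ."""
    half = surround // 2
    out = []
    for i, w in enumerate(widths):
        lo = max(0, i - half)
        hi = min(len(widths), i + half + 1)
        window = widths[lo:i] + widths[i + 1:hi]
        out.append(w / (sum(window) / len(window)))
    return out
```

On a perfectly flat series the detrended values are all 1.0, i.e. the long-term growth level is removed and only relative year-to-year variation remains.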
Files used for the tests above