IgorC
Administrator
Posts: 41
Post by IgorC on Dec 28, 2013 22:13:42 GMT
Hi, guys. It's time to discuss an upcoming listening test. It will be a multiformat test at 96 kbps, a logical continuation of the last public AAC listening test from 2011.
Post by Steve Forte Rio on Dec 28, 2013 22:16:21 GMT
Nice. So what samples will we use in our test? I mean, where will we get the pool of samples for further selection and sorting? Should we start by combining all the samples that were used in previous tests?
IgorC
Administrator
Posts: 41
Post by IgorC on Dec 29, 2013 14:40:11 GMT
Hello everybody.
Let's continue the discussion here. Right now we're working on the selection of samples. Post your suggestions; you can copy and paste your previous posts from Hydrogenaudio.
Post by kennedyb4 on Dec 29, 2013 15:16:56 GMT
I would like to suggest that the bulk of the samples be difficult or "killer" samples. The last test at 96 kbps used difficult samples, and even then it was quite hard for me to pick out many artifacts.
Post by darkbyte on Dec 29, 2013 16:14:16 GMT
I agree with kennedyb4. The non-low-anchor codecs are already mature enough that everyday music will most likely not be a problem for them at 96 kbps. I think choosing from the problematic samples already uploaded to Hydrogenaudio would be good. (I'm especially interested in how Opus has improved, or not, from 1.0 to 1.1; there are a couple of samples for this on Hydrogenaudio.)
Post by Steve Forte Rio on Dec 29, 2013 16:26:46 GMT
kennedyb4, of course. After gathering a large enough pool of samples we'll select the really hard-to-encode ones (before sorting).
But the selection must be based on listening experience only, because we mustn't target any particular codec; that means no test encodes during the selection stage. We must also refrain from choosing samples that are known to be problematic for particular encoders, because they would directly lower the resulting quality score for that encoder. Even if the final sample selection is completely random, such samples would still affect the results.
In no case should we pick samples based on known problems with particular encoders.
Post by kamedo2 on Dec 30, 2013 2:38:21 GMT
I still think we should test clipping as well, using the reference decoders. I don't think many people are even aware of the clipping problem that can occur during encoding and decoding. As for deviation, it can be avoided by running the reference decoders at their default settings. Modern encoders also pay moderate attention to avoiding clipping, and we are testing modern encoders near 100 kbps. The test should be fairly hard, and it might be a little easier with clipping.
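To make the clipping issue concrete, here is a minimal sketch that counts how many decoded samples land at or beyond full scale. It assumes the lossy file has already been decoded to a 32-bit float WAV (a fixed-point 16-bit decode would have hard-clipped those samples silently) and uses the Python soundfile library; the file name is illustrative.

import soundfile as sf  # assumed helper library for reading WAVs as float

def count_clipped(path, limit=1.0):
    # Count samples outside [-1.0, 1.0] in a float-decoded WAV.
    # A 16-bit decode would have silently hard-clipped these samples;
    # a float decode preserves them so they can be measured.
    data, rate = sf.read(path, dtype="float32")
    clipped = int((abs(data) >= limit).sum())
    return clipped, data.size

clipped, total = count_clipped("decoded_float.wav")  # illustrative name
print(f"{clipped} of {total} samples at or beyond full scale")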
Post by Gecko on Dec 31, 2013 12:32:02 GMT
I think we should have a dubstep sample. Unfortunately, so far I have been unable to find a suitable lossless source. The tracks on audiojelly.com are almost all 320 kbps MP3s, and the few WAVs I could find are not very "dubstep-y", IMO.
IgorC
Administrator
Posts: 41
Post by IgorC on Dec 31, 2013 14:28:26 GMT
Post by Steve Forte Rio on Jan 2, 2014 13:09:41 GMT
kamedo2 said: "I still think we should test clipping as well, using the reference decoders. [...] The test should be fairly hard, and it might be a little easier with clipping."
I suggest a vote. And I still insist on a separate analysis of the influence of clipping; I don't think it can be included in our current test (it would noticeably increase the testing complexity).
IgorC
Administrator
Posts: 41
Post by IgorC on Jan 2, 2014 15:45:21 GMT
kamedo2, Steve Forte Rio,
Correct me if I'm wrong: it's a real-life scenario in both cases, whether a user takes care of clipping or not. So both cases are fine, and prevention of clipping isn't a critical condition for the test.
Post by kamedo2 on Jan 2, 2014 17:15:42 GMT
IgorC, the difference with and without clipping should be rather minor. If we decide not to test clipping, we must decode the lossy file to a float WAV, and the float WAV must be attenuated to give the final WAV for the Java ABC/HR tool. I have a program to streamline the process.
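kamedo2's program isn't posted in the thread; as a rough illustration of the attenuation step, a sketch along these lines would do the job (again assuming the Python soundfile library, with an arbitrary illustrative gain and illustrative file names):

import soundfile as sf  # assumed library; kamedo2's actual tool isn't shown

def attenuate(in_path, out_path, gain=0.8):
    # Scale a float-decoded WAV down and write 16-bit PCM for ABC/HR.
    # The same gain must be applied to the reference and to every codec's
    # decode, otherwise the level difference itself would bias the ratings.
    data, rate = sf.read(in_path, dtype="float32")
    sf.write(out_path, data * gain, rate, subtype="PCM_16")

attenuate("decoded_float.wav", "abchr_input.wav")  # illustrative names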
IgorC
Administrator
Posts: 41
Post by IgorC on Jan 2, 2014 17:39:30 GMT
OK. Now about the number of samples: it should be high, around 40-50. Most of us here are on the same page.
Post by Steve Forte Rio on Jan 2, 2014 18:54:03 GMT
IgorC said: "It's a real-life scenario in both cases, whether a user takes care of clipping or not. So both cases are fine, and prevention of clipping isn't a critical condition for the test."
Yes, we have two scenarios, depending on the source material and the playback device/software/settings. kamedo2, I think I will do the normalization myself. IgorC, may we use foobar2000 for decoding?
IgorC
Administrator
Posts: 41
Post by IgorC on Jan 2, 2014 19:29:18 GMT
Steve Forte Rio said: "May we use foobar2000 for decoding?"
If it's necessary, yes, we can. But what about the native decoders?
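For what it's worth, the native decoders can produce the float WAVs directly from the command line; a hypothetical batch sketch (tool choices, flags, and file names are illustrative, and the exact decoder for each codec would have to be agreed on):

import subprocess

# opus-tools' opusdec can write a 32-bit float WAV directly.
subprocess.run(["opusdec", "--float", "sample.opus", "sample_float.wav"],
               check=True)

# Generic fallback via ffmpeg for formats without a handy native decoder,
# decoding to 32-bit float PCM so any over-full-scale peaks survive.
subprocess.run(["ffmpeg", "-y", "-i", "sample.m4a",
                "-c:a", "pcm_f32le", "sample_float.wav"],
               check=True)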