milotrain

milotrain t1_j8noplw wrote

You should be EQing things; I'm not saying you shouldn't. We can disagree about the details, and that's fine, but microphones and speakers are the same devices with the same limitations.

Is the CIE 1931 color space chromaticity diagram the entirety of a projected image?

−1

milotrain t1_j8m8s7d wrote

>But the question of OP was if you can tune suboptimal frequency curves to match a "known good" curve.

That's not how I read it. "If we can tune a headphone to a Harman target, why can't we use the same device to make a crappy headphone sound like a great one?" is (to me) a statement not about making a crappy headphone match the Harman target, but about making a crappy headphone sound like a good headphone. Subtle, but different.

I was using the tweeter/sub comparison as an extreme example. The fact is that EQ isn't free: there are phase shifts at EQ points, and extreme EQ moves (especially bell curves with tight Qs) produce artifacts at their limits. This is common knowledge among people who EQ rooms for a living. One of the reasons we are moving to woven projection screens is that a speaker array needs less EQ to compensate for transmission through a woven screen than through an acoustically perforated one.
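The phase cost of a tight-Q bell is easy to demonstrate numerically. This is a sketch, not anyone's production EQ: it uses the standard RBJ Audio EQ Cookbook peaking biquad, with assumed parameters (a 12 dB boost at 1 kHz, 48 kHz sample rate) chosen just for illustration.

```python
import cmath
import math

def peaking_biquad(f0, fs, gain_db, q):
    """RBJ cookbook peaking-EQ biquad coefficients (b, a)."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = (1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin)
    a = (1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin)
    return b, a

def response(biquad, f, fs):
    """Complex frequency response H(e^{jw}) of the biquad at frequency f."""
    b, a = biquad
    z1 = cmath.exp(-2j * math.pi * f / fs)
    return (b[0] + b[1] * z1 + b[2] * z1 * z1) / (a[0] + a[1] * z1 + a[2] * z1 * z1)

fs = 48000
wide = peaking_biquad(1000, fs, gain_db=12, q=1)   # broad 12 dB bell
tight = peaking_biquad(1000, fs, gain_db=12, q=8)  # tight 12 dB bell

# Both filters hit +12 dB with zero phase shift exactly at 1 kHz,
# but just off-center the tight Q drags the phase around far more.
for name, bq in (("Q=1", wide), ("Q=8", tight)):
    deg = math.degrees(cmath.phase(response(bq, 950, fs)))
    print(name, "phase at 950 Hz: %.1f deg" % deg)
```

The magnitude curves look similar near the peak, but the tight-Q filter shifts phase several times more just off-center, which is exactly the kind of side effect an FR plot alone never shows.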

This is also why, even with great examples like the UA Audio Sphere, you can't exactly match all microphones. And to be clear, in that comparison you are using a great microphone to model all the other microphones, including crappy ones, not a crappy microphone to match a great one.

So yes, the analogy was limited, but it still points at what's going on: first, there are things not captured in an FR plot that are acoustically important; second, it's not as simple as using an EQ to make one curve match another, because in some cases the sonic information isn't there to be boosted, and in some cases boosting it to the degree needed creates other problems that cannot be ignored (or fixed).

Technically this statement is no different from "can I EQ a crappy microphone to sound like a great microphone?", and everyone has already tried that. It's constantly being tested and attempted because it represents such a potential change in the recording industry. No one has gotten there, and there is a huge economic incentive to do so, much bigger than for EQing headphones.

4

milotrain t1_j8kuwrf wrote

Because you can't add (with EQ) what's not there. Remember, an FR plot is not the entirety of a sound signal; it's just one way to measure part of it.

(can't I EQ a tweeter to sound like a sub? I mean, it's just frequency, right?)
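One quick way to see that a magnitude FR plot underdetermines the signal: two different waveforms can share an identical magnitude spectrum. The toy signals and naive DFT below are purely illustrative; the trick is that circularly time-reversing a real signal conjugates its spectrum, leaving every bin's magnitude untouched.

```python
import cmath
import math

def dft(x):
    """Naive DFT, fine for toy-sized signals."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A decaying "click" and its circular time-reversal.
x = [1.0, 0.5, 0.25, 0.125, 0.0, 0.0, 0.0, 0.0]
y = [x[-t % len(x)] for t in range(len(x))]

mag_x = [abs(c) for c in dft(x)]
mag_y = [abs(c) for c in dft(y)]
# Different waveforms, bin-for-bin identical magnitude spectra.
print([round(m, 4) for m in mag_x])
print([round(m, 4) for m in mag_y])
```

Both prints show the same numbers, yet one waveform is a sharp attack and the other a reversed swell; a magnitude-only plot cannot tell them apart.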

I love how disliked this take is, especially by people who don't do things with sound for a living. Talk to any acoustician, audio engineer, mixer, etc., and they all understand my point as if it's common knowledge.

10

milotrain t1_j6j06u5 wrote

Why are people still buying Schiit stuff? This happens all the time.

Yes, get the Atom; or if you want to spend money and have a nice knob, get the Element; or get a Grace m900 (fuc*ing delightful); or a Topping DX3 Pro (if you want a simple, single box in the $200 tier).

2

milotrain t1_ixwcui3 wrote

Everyone does at some point. It's just like having a dominant eye. If I focus correctly, my imbalance goes away; if I focus wrong, it gets fairly extreme. It makes me chase IEM wax issues all the time. Annoying, but it is what it is; just use an inline channel-specific EQ if you want to trim it out as much as you can.
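The simplest form of that trim is just a static per-channel gain before any other EQ. The helper below is hypothetical (not any particular plugin's API), but it shows the arithmetic: convert a dB trim to a linear factor and scale one channel.

```python
def trim_channels(stereo, left_db=0.0, right_db=0.0):
    """Apply independent dB gains to each channel of (L, R) sample pairs."""
    gain_l = 10 ** (left_db / 20)
    gain_r = 10 ** (right_db / 20)
    return [(l * gain_l, r * gain_r) for l, r in stereo]

# Pull the left channel down 1.5 dB to offset a perceived imbalance.
samples = [(0.5, 0.5), (-0.25, -0.25)]
balanced = trim_channels(samples, left_db=-1.5)
```

A real channel-balance EQ would usually do this per band rather than broadband, but the broadband version is often enough for a small L/R offset.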

4

milotrain t1_ivcs7kp wrote

Yes, and yes. There are limitations: low frequency isn't perfect (obviously it's not moving the air that big drivers do, so you don't "feel" the bass the same way), and you can't EQ a headphone to do "everything," since all headphones have limitations, but the HD800s get pretty close. Smyth also recommends STAX.

Also, if this isn't obvious: it does all of this over a 7.1.2 soundfield, so you can feed it a Dolby Atmos mix (or DTS:X) and hear the full surround mix with location accuracy. There is also a head-tracking feature, so you can turn your head and the "room" stays where it is.

5

milotrain t1_ivco1f7 wrote

The short story is that you stick microphones in your ears and it tones out the room; then you put on headphones (without taking the microphones out) and it tones out the cans. It then does a fairly complex FFT-based process to match the two against an incoming signal. It has a testing mode where you are supposed to guess whether you have headphones on or whether it's the speakers playing; for the first 15 minutes or so of this test, everyone I know has gotten it wrong at least 50% of the time. It's that good.
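The Realiser's actual processing is proprietary, but the core idea of frequency-domain matching can be sketched. Under heavy simplification (toy 8-sample impulse responses, a naive DFT, and a crude epsilon in place of proper regularization), you divide the two measured spectra to get a per-bin correction filter:

```python
import cmath
import math

def dft(x):
    """Naive DFT, fine for toy-sized signals."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# Toy stand-ins for the two in-ear measurements.
room_ir = [1.0, 0.6, 0.3, 0.1, 0.05, 0.0, 0.0, 0.0]   # speakers + room
cans_ir = [1.0, 0.2, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0]   # headphones

H_room = dft(room_ir)
H_cans = dft(cans_ir)

# Per-bin correction so the headphone path mimics the room path.
# eps guards against dividing by near-zero bins; real inverse-filter
# design uses proper regularization instead of this blunt constant.
eps = 1e-6
C = [hr / (hc + eps) for hr, hc in zip(H_room, H_cans)]

# Applying C to the headphone response recovers the room response.
matched = [hc * c for hc, c in zip(H_cans, C)]
err = max(abs(m - hr) for m, hr in zip(matched, H_room))
print("max match error:", err)
```

This also shows where matching breaks down: wherever `H_cans` has a near-zero bin, the information simply isn't there, and the correction filter has to boost enormously or give up, which is the "you can't add what's not there" problem again.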

4

milotrain t1_ivbcw0z wrote

Depends on what my goal is. Enjoyment? Sure. Reference? Almost never.

As far as function goes, it's hard to do much better in a cost/performance model than the Etymotic ER4XR with Comply tips, especially when they go on one of their sub-$200 sales.

But... compared to my reference speaker system, I've not found a headphone system that can match it, and that's troublesome because I sometimes need to work on headphones and have the result match into the stage. The closest I can get is currently HD800s + Smyth A16, but they are shit for isolation. I'm working on tuning in a pair of Aeon 2 Noirs with the Smyth, but I'm not sure it's going to work out.

18

milotrain t1_is1qs52 wrote

You should see an audiologist. However, just to allay some of your concern: you may not actually have damage; you may just be sensitive and noticing differences that were, for the most part, already there. The wet boot thing is obviously something new, but hearing differently between the left and right ear is typical of everyone.

If I focus on it, my L & R sound very different, but if I don't, everything sounds normal. It's like having a dominant eye: when you care about it and think about it, the dominance is huge; when you don't focus on it, you just go about your day.

14