What no one will tell you about HDTV...

How many techs have been told that the picture was better after re-pointing a dish?


  • Total voters: 187
A bit-error-rate test at the receiver compared with the original signal would be the deciding factor. I doubt that any of us is equipped to carry out such a test.

Those who believe that no errors exist in digital TV signals must be smoking something. Of course there are errors. When they reach a certain level, the receiver will shut down. The bit error rate rises as the signal strength decreases until an unacceptable threshold is reached. That threshold is high enough and bad data is spread widely enough in the picture to go unnoticed by viewers.

Live TV doesn’t allow for error checking and retransmission the way sending data over the Internet does. The erroneous data is simply used as long as the error threshold hasn’t been reached.

Any time an airplane flies through the line-of-sight of the dish-to-satellite link, receivers should be shutting down all over the neighborhood. They don’t because the systems accept a certain amount of bad data.
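To put a number on "a certain amount of bad data," here is a minimal Python sketch (the million-bit block and the 1e-3 shutdown threshold are made-up illustration values; real receivers act on FEC statistics, not raw counts) showing how errors are present at every signal level but only trip the receiver past a threshold:

```python
import random

def receive(n_bits, ber, shutdown_threshold=1e-3):
    """Corrupt a bit stream at a given bit error rate (BER); this toy
    receiver keeps decoding until errors exceed its shutdown threshold."""
    errors = sum(1 for _ in range(n_bits) if random.random() < ber)
    if errors / n_bits > shutdown_threshold:
        return errors, "shut down"
    return errors, "still decoding; errors pass unnoticed"

random.seed(1)  # reproducible demo
for ber in (1e-6, 1e-5, 1e-4, 1e-2):  # weakening signal -> rising BER
    errors, status = receive(1_000_000, ber)
    print(f"BER {ber:.0e}: {errors:5d} bad bits per million -> {status}")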
 
"That threshold is high enough and bad data is spread widely enough in the picture to go unnoticed by viewers." - you should start study MPEG-2 or H.264 compression algos.
No such "spread widely enough" processing designed there.
 
You know, what he is saying is true, but it is such a small difference that it is probably barely perceptible.

As signal strength drops, errors in the signal naturally increase, and the digital stream has to have more "interpretation" done to it to produce the picture.

The threshold in the ATSC specification is high enough that you only see major picture degradation just before the decoder in your tuner shuts the output off.

If the threshold had been set lower, then you could still see the macroblocking and misalignment of image segments all over the screen that are sometimes seen in DXed ATSC signals.

While he has a point, it is a very small one and not worth so much energy. He has read some articles about fuzzy vs. crystal-clear digital pictures and swallowed them hook, line, and sinker.

I have had (and still have, on 129) weak satellite signals and strong satellite signals, and I can't tell any difference on my Hitachi 57F59 until the picture begins to macroblock. Then again, my Hitachi only resolves 1280x1080; perhaps if I had a 1920x1080p display I might be able to grouse about the picture fuzzing up just before it drops out.

My 1280x1080 picture still has WOW and looks like I'm right there looking through a window, even on some of the weaker channels on 129.

Tempest in a teapot!
 
There is such a thing as FEC (Forward Error Correction). What FEC does is correct any lost data. Without FEC, a digital picture could leave out lost pixels, and those lost pixels would just have the background color (a digital "grain," if you will). However, FEC replaces lost pixels using a generic data set for a given range of pixels. In a 1920x1080 frame, you have 2,073,600 pixels. Any one of those can be lost because of transmission problems. In a two-way world like the Internet, the receiving computer would just tell the host that a packet was lost and needed to be retransmitted.

With TV, there is no time for a satellite receiver or digital cable box to report back that some data was lost and needs to be retransmitted. So, to overcome this, there is a stream of data that provides a generic means of correcting those lost bits. If you are looking at a weather reporter in a red jacket and one of the pixels where his red jacket is were lost, the FEC would probably have a generic shade of red to fill in for all the pixels in that portion of the TV screen.

In all reality, the average viewer isn't going to notice an error-corrected pixel or two. Hell, they probably wouldn't notice a few thousand. In any case, too many of these and the picture loses the sharpness that the whole HD realm was intended to provide. I've seen positive signal locks in the 80s and higher that still had unimpressive pictures. Many times it was a kinked fitting or something else screwing with the sync rates, causing data to be delivered with poor timing. The BER tests were good at the dish but not good at the receiver.
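For what it's worth, when FEC cannot repair the data, decoders typically conceal the damage rather than fill it with a generic color. A minimal sketch of one common concealment strategy, copying the co-located block from the previous frame (the 2x2 block size and gray values here are made up; real decoders work on 16x16 macroblocks):

```python
BLOCK = 2  # toy macroblock size

def conceal(current, previous, lost_blocks):
    """Replace each lost block in `current` with the previous frame's block."""
    frame = [row[:] for row in current]
    for (by, bx) in lost_blocks:
        for y in range(by * BLOCK, (by + 1) * BLOCK):
            for x in range(bx * BLOCK, (bx + 1) * BLOCK):
                frame[y][x] = previous[y][x]
    return frame

previous = [[10, 10, 20, 20],
            [10, 10, 20, 20],
            [30, 30, 40, 40],
            [30, 30, 40, 40]]
current  = [[11, 11, 21, 21],
            [11, 11, 21, 21],
            [31, 31, 41, 41],
            [31, 31, 41, 41]]

# Suppose transmission errors wiped out the top-right block (0, 1):
for row in conceal(current, previous, [(0, 1)]):
    print(row)
```

When there is motion on screen, that stale copied block is exactly the macroblocking people describe.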

Back when I was a cable tech, I would go on service calls for their boxes, and the Scientific-Atlanta boxes all had a diagnostics screen where we could see the error counts. They were displayed in red right next to the signal levels, which were usually white with an acceptable +3 dB. I often found it was either amped too late in the distribution, a kinked wire, or a bad fitting somewhere. The same principle applied here: good signal levels, bad error counts.

If you look at the picture below, the box is getting a strong signal (too strong actually) and it has an unacceptable error count of 13/sec.


[Attached image: t2000_01.jpg]
 
You've made some wrong assumptions:
- error correction restores exactly the bits that were corrupted;
- a packet with uncorrectable bits is discarded entirely;
- the video stream consists of static and incremental frames (see the MPEG-2/H.264 standards);
- one packet is less than 200 bytes long, while an I-frame is usually 30-60 KB; do the math;
- decompression algorithms work on a sequence of I, B, and P frames.

Practically, this discussion won't get anywhere without digging down into the MPEG-2/4 algorithms.

In reality, the postulate about fuzzy, blurry, grainy, etc. pictures is wrong.
The real result is macroblocking, green spots (the decoder's default color) in place of missing blocks or frames, not missing pixels (!), or a full blackout.
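That first point, that correction restores exactly the original bits, is easy to demonstrate with a toy code. A minimal Hamming(7,4) sketch (real DVB links use Reed-Solomon and convolutional codes, but the principle is the same: redundancy locates the errored bit and flips it back):

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit Hamming codeword (positions 1-7)."""
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct any single-bit error, then return the original 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4  # syndrome = 1-based error position, 0 = clean
    if pos:
        c[pos - 1] ^= 1         # flip the bad bit back: exact restoration
    return [c[2], c[4], c[5], c[6]]

codeword = hamming74_encode(1, 0, 1, 1)
codeword[4] ^= 1                   # a transmission error flips one bit
print(hamming74_decode(codeword))  # -> [1, 0, 1, 1], the exact original bits
```

On the packet math: MPEG transport packets are 188 bytes with a 184-byte payload, so a 30 KB I-frame spans roughly 30,000 / 184 ≈ 165 packets, and losing any one of them damages the whole frame.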
 
You've made some wrong assumptions:
- error correction restores exactly the bits that were corrupted;
- a packet with uncorrectable bits is discarded entirely;
- the video stream consists of static and incremental frames (see the MPEG-2/H.264 standards);
- one packet is less than 200 bytes long, while an I-frame is usually 30-60 KB; do the math;
- decompression algorithms work on a sequence of I, B, and P frames.

Practically, this discussion won't get anywhere without digging down into the MPEG-2/4 algorithms.

In reality, the postulate about fuzzy, blurry, grainy, etc. pictures is wrong.
The real result is macroblocking, green spots (the decoder's default color) in place of missing blocks or frames, not missing pixels (!), or a full blackout.

1. I didn't assume anything. What I wrote is all referenced.
2. I didn't postulate that anything happens. I write about real-world observations, problems, and solutions.
3. If you've got a point here, where are your references and what is the point?
 
QEF and FP are based on FEC ability

There is such a thing as FEC (Forward Error Correction). What FEC does is correct any lost data. Without FEC, a digital picture could leave out lost pixels, and those lost pixels would just have the background color (a digital "grain," if you will). However, FEC replaces lost pixels using a generic data set for a given range of pixels. In a 1920x1080 frame, you have 2,073,600 pixels. Any one of those can be lost because of transmission problems. In a two-way world like the Internet, the receiving computer would just tell the host that a packet was lost and needed to be retransmitted.

With TV, there is no time for a satellite receiver or digital cable box to report back that some data was lost and needs to be retransmitted. So, to overcome this, there is a stream of data that provides a generic means of correcting those lost bits. If you are looking at a weather reporter in a red jacket and one of the pixels where his red jacket is were lost, the FEC would probably have a generic shade of red to fill in for all the pixels in that portion of the TV screen.

In all reality, the average viewer isn't going to notice an error-corrected pixel or two. Hell, they probably wouldn't notice a few thousand. In any case, too many of these and the picture loses the sharpness that the whole HD realm was intended to provide. I've seen positive signal locks in the 80s and higher that still had unimpressive pictures. Many times it was a kinked fitting or something else screwing with the sync rates, causing data to be delivered with poor timing. The BER tests were good at the dish but not good at the receiver.

Back when I was a cable tech, I would go on service calls for their boxes, and the Scientific-Atlanta boxes all had a diagnostics screen where we could see the error counts. They were displayed in red right next to the signal levels, which were usually white with an acceptable +3 dB. I often found it was either amped too late in the distribution, a kinked wire, or a bad fitting somewhere. The same principle applied here: good signal levels, bad error counts.

If you look at the picture below, the box is getting a strong signal (too strong actually) and it has an unacceptable error count of 13/sec.


[Attached image: t2000_01.jpg]

Yes, and the threshold of visibility (TOV), a.k.a. failure point (FP), and the quasi-error-free (QEF) condition are determined by the ability of the forward error correction (FEC) to compensate for errors.
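To put those conditions in perspective: QEF is usually quoted as less than one uncorrected error event per hour, and simple arithmetic shows why a BER near 1E-10 earns that name. A minimal sketch (the 27 Mbit/s transponder payload rate is a made-up round number):

```python
def bit_errors_per_hour(ber, bitrate_bps):
    """Expected number of bit errors per hour at a given BER and bit rate."""
    return ber * bitrate_bps * 3600

rate = 27e6  # hypothetical ~27 Mbit/s transponder payload
for ber in (1e-3, 1e-6, 1e-10):
    print(f"BER {ber:.0e}: ~{bit_errors_per_hour(ber, rate):,.0f} errors/hour")
```

At 1E-10 the stream takes only about ten bit errors per hour; at 1E-3 it takes tens of millions, which is the unwatchable end of the curve.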
 
I think the attached figure (if it actually got attached :)) might be helpful in this discussion. The figure explains:

The BER varies with the signal-to-noise ratio (SNR), and thus the signal strength and any external noise. The graph uses Eb/N0, which is similar to SNR but is normalized to take out differences in modulation due to bandwidth (e.g., QPSK needs twice the SNR of BPSK, but because it uses half the bandwidth, you get twice the SNR automatically). Please note that the receive antenna gain is quite large, so terrestrial noise is generally attenuated a lot. That's not to say that there isn't any terrestrial interference; it's just attenuated a lot by the receive antenna.

It takes an Eb/N0 or SNR range of about 7 dB to vary the BER from almost perfect (1E-12) to unwatchable (1E-3). By the way, I have heard that the BER for quasi-error-free is about 1E-10.

The Viterbi or "outer" error correction (or FEC). reduces the signal range for that same range of BER to about 4.5 dB.

The Reed-Solomon or "inner" FEC, when combined with the Viterbi FEC, reduces the range for the SNR to about 1 dB.

That 1 dB is the "cliff" that everyone talks about. So the cliff isn't a "brick wall", but it is very, very steep.

One other piece of data that would be very helpful is the scaling of the signal strength meter in the receivers: whether it's linear or logarithmic. I think it's probably logarithmic, with an expanded scale; perhaps a change of 10 is 1 dB. Given that information, the BER curves, and perhaps some information about how much the signal strength and noise vary, we can see how much margin is really needed to ensure that we are always above the QEF threshold.

One other point: the discussion about QEF assumes that all bit errors are created equal. If you look at how MPEG works, this isn't really the case. Some bits are fairly minor, such as a high-frequency element in a macroblock in the corner. Or perhaps the bit error is in another program on the same transponder! Other bits are much more important, such as header information. But at a BER of 1E-10, you aren't going to see much in any case.
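As a rough check on the 7 dB figure, here is a minimal sketch of the theoretical curve for uncoded, Gray-coded QPSK on an AWGN channel (no FEC, so this is the shallow curve, not the 1 dB cliff):

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical BER of uncoded, Gray-coded QPSK over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10.0)             # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))   # = Q(sqrt(2*Eb/N0)) per bit

for ebn0_db in range(4, 15):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> BER ~ {qpsk_ber(ebn0_db):.1e}")
```

It crosses 1E-3 near 7 dB and 1E-12 near 14 dB, i.e. roughly the 7 dB span described above; the Viterbi and Reed-Solomon stages are what squeeze that span down to the ~1 dB cliff.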
 

Attachments

  • BER curves.JPG
Missed it again, Jim5506

You know, what he is saying is true, but it is such a small difference that it is probably barely perceptible.

As signal strength drops, errors in the signal naturally increase, and the digital stream has to have more "interpretation" done to it to produce the picture.

The threshold in the ATSC specification is high enough that you only see major picture degradation just before the decoder in your tuner shuts the output off.

If the threshold had been set lower, then you could still see the macroblocking and misalignment of image segments all over the screen that are sometimes seen in DXed ATSC signals.

While he has a point, it is a very small one and not worth so much energy. He has read some articles about fuzzy vs. crystal-clear digital pictures and swallowed them hook, line, and sinker.

I have had (and still have, on 129) weak satellite signals and strong satellite signals, and I can't tell any difference on my Hitachi 57F59 until the picture begins to macroblock. Then again, my Hitachi only resolves 1280x1080; perhaps if I had a 1920x1080p display I might be able to grouse about the picture fuzzing up just before it drops out.

My 1280x1080 picture still has WOW and looks like I'm right there looking through a window, even on some of the weaker channels on 129.

Tempest in a teapot!

You're right in saying that I'm right. Thanks.

I will add that SOME people will never see this difference. For example, you haven't. Some people won't see the difference that a calibration makes, either. An estimated 6 million people think they're watching HD and they don't even have their HDTV hooked up to an HD source.

In 2006, of all the HD customers I visited, only four were watching HD! Only four had the receiver resolution set above 480p!

What is interesting, and troubling, is that only one service call was about picture quality. All of those others who weren't even watching HD hadn't called about a picture quality issue; they were calling for all the other signal dropout and normal service call stuff.

So what's my point? Great HD isn't about who can't see it, but about those who can. It is those who can see the differences who need to know the full scope of potential "quality thieves" like calibration and signal strength.

And most important, this is for the people who are installing the greatest percentage of HDTV systems: satellite installers!

How big of a problem is this?

When 20% of installers have had this reported to them, it certainly isn't a rarity. It sounds like it is way more common than we'd like to believe.

How important is this?

How many times have statements like "No, your signal won't change your picture quality" and "Digital is digital, all or nothing" been made on this forum alone?

Thread after thread, I have been criticized for trying to assist those who could possibly benefit from a signal increase, and I have been shot down by you and others.

I can't count how many times. But ask yourself again how big a problem this is: how many have been robbed of the HD quality they deserve because of wrong information?

How many techs, using the excuse "signal doesn't matter as long as you have lock," do you think avoided the trip to the roof when they should have attempted to increase the signal?

Installers need to know. This is important to the understanding of the basic functioning of digital systems.

And, to this:

"He has read some articles about fuzzy vs crystal clear digital pictures and swallowed the hook, line and sinker."

I have been investigating what has been seen and reported in the field. I've been chasing what I was seeing and hearing. You probably know that, though, don't you? You've posted (incorrectly) in most of my threads.
 
I KNEW there had to be an engineer here!

I think the attached figure (if it actually got attached :)) might be helpful in this discussion. The figure explains:

The BER varies with the signal-to-noise ratio (SNR), and thus the signal strength and any external noise. The graph uses Eb/N0, which is similar to SNR but is normalized to take out differences in modulation due to bandwidth (e.g., QPSK needs twice the SNR of BPSK, but because it uses half the bandwidth, you get twice the SNR automatically). Please note that the receive antenna gain is quite large, so terrestrial noise is generally attenuated a lot. That's not to say that there isn't any terrestrial interference; it's just attenuated a lot by the receive antenna.

It takes an Eb/N0 or SNR range of about 7 dB to vary the BER from almost perfect (1E-12) to unwatchable (1E-3). By the way, I have heard that the BER for quasi-error-free is about 1E-10.

The Viterbi or "inner" error correction (FEC) reduces the signal range for that same range of BER to about 4.5 dB.

The Reed-Solomon or "outer" FEC, when combined with the Viterbi FEC, reduces the range for the SNR to about 1 dB.

That 1 dB is the "cliff" that everyone talks about. So the cliff isn't a "brick wall", but it is very, very steep.

One other piece of data that would be very helpful is the scaling of the signal strength meter in the receivers: whether it's linear or logarithmic. I think it's probably logarithmic, with an expanded scale; perhaps a change of 10 is 1 dB. Given that information, the BER curves, and perhaps some information about how much the signal strength and noise vary, we can see how much margin is really needed to ensure that we are always above the QEF threshold.

One other point: the discussion about QEF assumes that all bit errors are created equal. If you look at how MPEG works, this isn't really the case. Some bits are fairly minor, such as a high-frequency element in a macroblock in the corner. Or perhaps the bit error is in another program on the same transponder! Other bits are much more important, such as header information. But at a BER of 1E-10, you aren't going to see much in any case.

Bravo!
Where have you been all my HD life?
Thank you!!!
 
You've made some wrong assumptions:
- error correction restores exactly the bits that were corrupted;
- a packet with uncorrectable bits is discarded entirely;
- the video stream consists of static and incremental frames (see the MPEG-2/H.264 standards);
- one packet is less than 200 bytes long, while an I-frame is usually 30-60 KB; do the math;
- decompression algorithms work on a sequence of I, B, and P frames.

Practically, this discussion won't get anywhere without digging down into the MPEG-2/4 algorithms.

In reality, the postulate about fuzzy, blurry, grainy, etc. pictures is wrong.
The real result is macroblocking, green spots (the decoder's default color) in place of missing blocks or frames, not missing pixels (!), or a full blackout.

Can we get some Forward Error Correction on your English? It's badly broken. :cool:
 
