female Snuppa Wetting (somewhat) Restored


Recommended Posts

7 hours ago, TVGuy said:

Hey...

So, I appreciate your work in restoring classic videos, and would like to talk with you about your process.

In addition to HD Wetting, I have a mainstream business whose mission is to provide technology-based solutions supporting the broadcast, media, and streaming industries.  A large part of my business is converting old media formats and restoring them.

When dealing with many old video formats, we can divide the modernization of old videos into two distinct components: restoration and enhancement.

The problem with old video formats (NTSC, PAL, SECAM) is they were designed to produce a signal that would look good when displayed on a CRT display.  Hallmarks of these old video signals are interlacing, where the display would flash alternating fields back and forth to produce the image instead of a single continuous frame, and the use of a luminance-based signal with U and V axis phasing to produce color.  While this worked well with the way an electron gun drew a picture on a CRT display, modern displays are inherently progressive scan, drawing a full frame in its entirety, with each pixel having a distinct RGB value.

In modernizing these old video formats, the first challenge we have is in properly converting the signal from an interlaced YUV signal to a progressive scan RGB signal.  Let us first just look at the de-interlacing component of the process, or the process of converting a frame made up of interlaced fields to a single continuous frame.  Typical processes for capturing old video signals and converting them to a progressive scan signal usually result in a significant loss of information, often noticeably impacting image quality.  This is why videos that you may recall watching on a CRT display, and not looking that bad, will look horrendous on a modern display.

The issue is that there is no simple, easy way to convert interlaced frames to progressive scan frames.  Here is why- The simplest thing to do would be to remove one of the interlaced fields, then double the lines of the remaining field.  This is probably the most common thing we see in de-interlacing, as it is the easiest to do, but it has significant drawbacks.  By removing one of the fields, we immediately lose half of the visual information.  Instantly, we are cutting the overall resolution in half.  On top of that, we are reducing our apparent frame rate.  With these interlaced signals, because of the way a CRT would flash the alternating fields, the motion cadence we would observe on a native interlaced display would be double the frame rate.  So, in the case of an NTSC signal, at roughly 30 frames per second, we would actually be watching a motion cadence of 60Hz.  The de-interlacing process, however, would cut in half not just our overall resolution, but also result in a 30Hz motion cadence.
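A minimal sketch of that "discard a field and double the lines" approach, assuming each frame is already a NumPy array of scan lines (the function name is just illustrative):

```python
import numpy as np

def naive_deinterlace(frame):
    """Keep only the top field and repeat each of its lines.
    Half the vertical detail and half the motion cadence are thrown away."""
    top_field = frame[0::2]                 # even-numbered scan lines only
    return np.repeat(top_field, 2, axis=0)  # line-double back to full height
```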

There are better ways to deal with de-interlacing, and a lot of different capture tools and video software offer these methods.  Rather than delete one of the fields, they will use some kind of algorithm to blend the two fields together.  This manages to maintain some of the resolution, but you still typically lose half of the intended motion cadence.

To truly restore an old video signal so it can be viewed on a modern display with its intended resolution and motion cadence, the following must be done- Each alternating field needs to be separated into its own distinct frame.  The missing field from each frame should then be reconstructed using a high-quality interpolative algorithm.  This maintains the original motion cadence of the signal, all of the original visual information is present, and there won't be any perceived intra-frame loss of resolution.  Unfortunately, this is not something you can do by just applying filters, as it involves re-timing and converting the original signal.
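As a rough illustration of that field-separation idea (a sketch only, not a production de-interlacer; it ignores the half-line vertical offset between fields and uses plain linear interpolation for the missing lines):

```python
import numpy as np
import cv2  # assumes opencv-python for the line interpolation

def bob_deinterlace(frame):
    """Split one interlaced frame into its two fields and rebuild each field
    as a full progressive frame by interpolating the missing lines.
    Two output frames per input frame keeps the 50/60 Hz motion cadence."""
    h, w = frame.shape[:2]
    out = []
    for parity in (0, 1):                  # top field, then bottom field
        field = frame[parity::2]           # every other scan line
        full = cv2.resize(field, (w, h), interpolation=cv2.INTER_LINEAR)
        out.append(full)
    return out                             # doubled frame rate
```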

Then there is the matter of converting YUV to RGB.  Most people don't even bother with this, but the result will be colors that are noticeably off- This is why old video tapes typically have that particular look to their colors that reveals they are from an old video.  However, color science was fairly well established by the 90's, and the colors of these old videos weren't supposed to look like that.  You can approximate this with color correction after the video has been captured, but to truly bring back the video the way it was intended, there is a lot of math needed to convert between the two different color spaces.

Additionally, this color space conversion should be done in hardware, before the signal is digitized.  Most capture devices will only capture 8-bit video.  Once you start shifting the color space of that 8-bit video to move to the correct color space, you are going to have to shift outside of that 8-bit range.  This means your corrected range will be closer to 5- or 6-bit video.  You end up losing color information that was in the original, but can't be maintained when converting color spaces in an 8-bit arena.  The hardware to do this is expensive, and an alternative would be to use a higher-end capture card that isn't limited to 8-bit capture.  This will give you a greater bit depth to work in when converting your color space, resulting in more information being retained when targeting an 8-bit format delivery.
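The conversion itself is a fixed bit of matrix math; the important part is doing it at higher precision than the 8-bit source and only quantizing at the very end.  A minimal sketch, assuming BT.601 studio-range YCbCr input held in a NumPy array:

```python
import numpy as np

def ycbcr601_to_rgb16(ycbcr8):
    """BT.601 studio-range YCbCr (8-bit) -> RGB, computed in floating point
    and quantized to 16 bits so out-of-range excursions aren't crushed."""
    y  = (ycbcr8[..., 0].astype(np.float64) - 16.0) / 219.0
    cb = (ycbcr8[..., 1].astype(np.float64) - 128.0) / 224.0
    cr = (ycbcr8[..., 2].astype(np.float64) - 128.0) / 224.0
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    rgb = np.stack([r, g, b], axis=-1)
    # Quantize only at the very end, and to more than 8 bits if you can.
    return np.clip(rgb * 65535.0 + 0.5, 0.0, 65535.0).astype(np.uint16)
```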

There are other elements to consider as well, like pixel aspect ratio, pixel sampling, aspect ratio conversion, and visual resolution theory as related to the Nyquist folding frequency of a pixel array in a certain color space.

-------------------------------------

So that is some background on restoration.  Of course what you are actually able to do will depend heavily on what you have to start with.  Not having an original tape, or starting with a video signal that has already been improperly captured and converted will severely limit your ability to create a true restoration of the video.

Once the video is restored to its intended color, motion, and resolution, you can then go on to enhance it in various ways, such as increasing its resolution, using modern color grading techniques, or in other ways.

Watching the Snuppa video you provided here, there are several things I notice right away.  One is that the video is at 24 frames per second.  Neither NTSC nor PAL video was natively 24 frames per second, so the video isn't being shown at its intended frame rate or motion cadence.  For NTSC we should be at 60Hz, or 50Hz if the source was PAL.  I also notice that high-contrast vertical lines get wavy at times, which is an artifact of de-interlacing by blending the two fields together.  The black levels, and the lack of definition in the color of her shirt, also reveal that there was never a proper YUV to RGB conversion done on the color space.

Now, it looks like you did some work with filters.  It looks like there is quite a bit of noise suppression, but this noise suppression appears to be universal, which results in a lack of hair texture or cloth texture in her jeans.  I would suggest targeting the noise reduction filter you are using to specific colors, so as to remove visible noise from large solid-color areas without losing detail in things like her hair.  It also looks like you may have attempted some sort of uprezzing using an unsharp mask, but it looks like your settings were maybe a little too aggressive, resulting in cleaner edges but the loss of other visual details.
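One way to do that kind of color-targeted noise reduction, sketched with OpenCV (the target color and tolerances below are placeholders you would tune by eye, not values from anyone's actual workflow):

```python
import numpy as np
import cv2  # assumes opencv-python

def denoise_flat_color(frame_bgr, target_hsv=(110, 150, 150), tol=(15, 80, 80)):
    """Denoise only pixels near a chosen color (e.g. a solid shirt),
    blending the untouched original back in everywhere else so hair and
    denim texture survive."""
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 10, 10, 7, 21)
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lo = np.clip(np.array(target_hsv) - np.array(tol), 0, 255).astype(np.uint8)
    hi = np.clip(np.array(target_hsv) + np.array(tol), 0, 255).astype(np.uint8)
    mask = cv2.inRange(hsv, lo, hi).astype(np.float32) / 255.0
    mask = cv2.GaussianBlur(mask, (15, 15), 0)[..., None]   # soften the edges
    out = mask * denoised + (1.0 - mask) * frame_bgr
    return out.astype(np.uint8)
```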

Of course, this might have been all you were capable of doing.  I don't know what your starting point was as far as a file, whether you had access to an original tape or DVD, or if you were having to deal with a file that was already de-interlaced and had the colors baked into the wrong color space.

Umm....yeah. Took the words right out of my mouth. Obviously. 

😁😅

13 hours ago, JMatthews1995 said:

Wow. Some impressive information there! Good read! 😄

 

7 hours ago, Hilbron said:

Umm....yeah. Took the words right out of my mouth. Obviously. 

😁😅

Sorry guys!  My college degree is actually in Motion Picture, Television, and Broadcast Technology.  My thesis was actually on video signal format conversion, digitization, and encoding.

On 8/7/2020 at 1:27 PM, TVGuy said:

Then there is the matter of converting YUV to RGB.  Most people don't even bother with this, but the result will be colors that are noticeably off- This is why old video tapes typically have that particular look to their colors that reveals they are from an old video.  However, color science was fairly well established by the 90's, and the colors of these old videos weren't supposed to look like that.  You can approximate this with color correction after the video has been captured, but to truly bring back the video the way it was intended, there is a lot of math needed to convert between the two different color spaces.

Additionally, this color space conversion should be done in hardware, before the signal is digitized.  Most capture devices will only capture 8-bit video.  Once you start shifting the color space of that 8-bit video to move to the correct color space, you are going to have to shift outside of that 8-bit range.  This means your corrected range will be closer to 5- or 6-bit video.  You end up losing color information that was in the original, but can't be maintained when converting color spaces in an 8-bit arena.  The hardware to do this is expensive, and an alternative would be to use a higher-end capture card that isn't limited to 8-bit capture.  This will give you a greater bit depth to work in when converting your color space, resulting in more information being retained when targeting an 8-bit format delivery.

As an engineer working with software-defined radio (SDR), doing the color space conversion in analog is no longer the optimum way to do it. Modern ADCs have come a long way, and even one that does 24MS/s at 16 bits - the WM8213 - costs less than $2 in bulk. Doing processing in analog is subject to noise and nonlinearity, plus fast opamps are not cheap.

If you're just trying to extract the video off a VHS tape (the most common consumer analog video storage format), it's a moot point when VHS has less than 8 effective bits of signal quality. Capture uncompressed or lossless compressed because disk space is cheap enough to allow that and you don't want compression artifacts between the raw frames and whatever algorithms you try to apply to improve it.

I think what has most promise in boosting video quality nowadays is deep learning. An example of that is DLSS, which is used to allow games to render at a lower resolution and then be scaled up in real time for display. An algorithm that does not need to run in real time and can look at future as well as past frames should be capable of even more impressive results.
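A toy sketch of that idea in PyTorch (purely illustrative and untrained: a small multi-frame network that fuses a window of neighbouring frames and upsamples with a sub-pixel shuffle, which an offline restorer can feed with both past and future frames):

```python
import torch
import torch.nn as nn

class TemporalSR(nn.Module):
    """Toy multi-frame super-resolution net: fuse a window of neighbouring
    frames, then upscale the centre frame 2x with a sub-pixel shuffle."""
    def __init__(self, frames=5, channels=3, feat=32, scale=2):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(frames * channels, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.up = nn.Sequential(
            nn.Conv2d(feat, channels * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, window):             # window: (N, frames*channels, H, W)
        return self.up(self.fuse(window))

# Offline use: stack past and future frames around the frame being restored,
# something a real-time upscaler like DLSS cannot afford to do.
net = TemporalSR()
window = torch.rand(1, 5 * 3, 480, 640)    # five SD RGB frames, stacked
hires = net(window)                        # -> (1, 3, 960, 1280)
```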

7 hours ago, GoingGreen said:

doing the color space conversion in analog is no longer the optimum way to do it.

I didn't mean to suggest doing the color space conversion via analog equipment, for the very reasons you cited, only with dedicated hardware meant for that purpose.  For my business we use Teranex converters.

7 hours ago, GoingGreen said:

Capture uncompressed or lossless compressed because disk space is cheap enough to allow that and you don't want compression artifacts between the raw frames and whatever algorithms you try to apply to improve it.

I agree... Except that hardware capable of capturing an uncompressed video stream is expensive and not readily available at the consumer level.  Yes, if uncompressed capture of the signal could be achieved, the video format conversion could all be done in software in a way that is lossless.

7 hours ago, GoingGreen said:

If you're just trying to extract the video off a VHS tape (the most common consumer analog video storage format), it's a moot point when VHS has less than 8 effective bits of signal quality.

Many of the Snuppa videos, and other videos from that era, originated on MiniDV and were originally captured via firewire.  This is why in the early 2000's we suddenly had an explosion of this kind of content.  Despite being a digital format, MiniDV would still record an NTSC or PAL signal, complete with interlacing and in a YUV color space as the intended display medium was still a CRT television.

Also, I deal with a lot of VHS in my mainstream business.  I do not disagree in any way that VHS has less than 8 effective bits of signal quality... However, it has been my experience that to do the color space conversion you need to at least be working with an intermediate color space that gives you a greater bit depth.  Due to the way a lot of video editing software works, you are artificially limited to the bit depth that you originally captured at... At least without taking various steps and engaging in workarounds to utilize an intermediate color space that gives you more room.

7 hours ago, GoingGreen said:

I think what has most promise in boosting video quality nowadays is deep learning. An example of that is DLSS, which is used to allow games to render at a lower resolution and then be scaled up in real time for display. An algorithm that does not need to run in real time and can look at future as well as past frames should be capable of even more impressive results.

Yes... Absolutely.  In fact, with my mainstream business, we are working on training an in-house AI for uprezzing, stabilizing, and format conversion.  When it works, it is absolutely amazing.  We can take old, noisy VHS footage and transform it into modern-looking, high-frame-rate 4K material.  Unfortunately, we aren't quite at the place where it is reliable yet.

2 minutes ago, TVGuy said:

I agree... Except that hardware capable of capturing an uncompressed video stream is expensive and not readily available at the consumer level.

For PAL/NTSC, those were readily available as PCI analog TV cards. The Linux wikis should have information on what chips they use, and from there you can find out how many bits the ADCs are. (And you do want to use Linux for that, because good luck finding an analog TV card that works on Windows 10.) Analog TV is long dead, so it shouldn't be too difficult to find one of those cards for next to nothing. Use the S-video input if possible, but composite is far more common on VHS decks.

For uncompressed HD captures, the hardware is indeed more expensive, albeit still somewhat affordable for the consumer; some cards are available for about $100. But as far as I'm aware, there are no common consumer storage formats for analog HD video, just some experimental extensions to the VHS format that never managed widespread use.

1 hour ago, GoingGreen said:

For PAL/NTSC, those were readily available as PCI analog TV cards. The Linux wikis should have information on what chips they use, and from there you can find out how many bits the ADCs are. (And you do want to use Linux for that, because good luck finding an analog TV card that works on Windows 10.) Analog TV is long dead, so it shouldn't be too difficult to find one of those cards for next to nothing. Use the S-video input if possible, but composite is far more common on VHS decks.

For uncompressed HD captures, the hardware is indeed more expensive, albeit still somewhat affordable for the consumer; some cards are available for about $100. But as far as I'm aware, there are no common consumer storage formats for analog HD video, just some experimental extensions to the VHS format that never managed widespread use.

I wasn't aware that any of the old PCI analog capture cards were truly uncompressed.  I was working in broadcast at the time such cards were in common use, and it was my understanding that, at the time, very few drives could handle the sustained write rates involved with uncompressed video capture, even in the SD realm.  Processing was also slow enough that most of the capture cards compressed to some sort of codec via hardware at the point of capture.  The standard 25-megabit DV codec was the most common.

For cards that offer true, uncompressed, analog capture, I'm not finding much in the way of consumer grade hardware that offers this.  There are some products from Matrox, but these are firmly in the realm of professional broadcast projects.  At $595 the Blackmagic Decklink Studio 4K is the most affordable card I can find right now that offers true uncompressed analog signal capture.

In regards to using S-video on VHS decks, I could be wrong, but it was my understanding that this wouldn't give you any benefit.  This is because with VHS you are dealing with a single carrier for luminance and a modulated sub-carrier for chrominance.  The color signal consists of harmonics that sit between the harmonics of the baseband luma signal, instead of the two occupying continuous frequency bands alongside each other.  The luma and chroma signals can be separated into separate Y and UV channels via comb filter, to give you an S-video signal, but this is only adding an extra analog processing phase and isn't giving you anything extra when dealing with VHS.  S-VHS and Hi8 video have much greater luma signal bandwidth, and thus benefit from S-video connections, which transport luma and chroma separately.  For regular VHS, however, the signal on the tape itself has combined luma and chroma information, so there is no benefit to separating them before capture.
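For anyone curious what that comb-filter separation amounts to, here is a bare-bones sketch (a one-line comb on sampled composite scan lines; real decks and capture chips use adaptive 2D/3D combs, and the wrap-around at the first row is glossed over):

```python
import numpy as np

def line_comb_split(lines):
    """One-line comb filter for NTSC composite video.  The colour subcarrier
    flips phase on adjacent scan lines, so summing neighbouring lines cancels
    chroma (leaving luma) and differencing them cancels luma (leaving chroma).
    `lines` is a 2-D array with one sampled scan line per row."""
    prev = np.roll(lines, 1, axis=0)       # previous scan line (wraps at row 0)
    luma = 0.5 * (lines + prev)
    chroma = 0.5 * (lines - prev)
    return luma, chroma
```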

At my facility, when dealing with analog video, I first convert the signal to SDI.  Depending on the video source, I'll use either the composite video, S-video, or component video analog output into a video time-base corrector.  The TBC generates a new timing signal, as often the video's control track is the most damaged part of old formats, since it sits at the very edge of the tape.  The TBC also lets me correct any levels that might have drifted, as well as address any tape wobble or distortion, and maintain consistent black and white levels.  The analog signal out of the TBC then goes to an analog/digital converter for conversion to SDI.  In my case, I am using Blackmagic's Analog/Digital mini-converters.  The signal is now digital, but not processed in any other way- It is still interlaced and in a YUV colorspace.  That SDI signal is then routed through a Teranex standards converter, which I use to convert the signal from interlaced to progressive scan while maintaining the original 60Hz or 50Hz motion cadence, convert the color space to RGB, and remove the 7.5 IRE analog setup level.  That resulting uncompressed signal is then captured and ready to be compressed according to its intended delivery.

2 hours ago, TVGuy said:

I wasn't aware that any of the old PCI analog capture cards were truly uncompressed.  I was working in broadcast at the time such cards were in common use, and it was my understanding that, at the time, very few drives could handle the sustained write rates involved with uncompressed video capture, even in the SD realm.  Processing was also slow enough that most of the capture cards compressed to some sort of codec via hardware at the point of capture.  The standard 25-megabit DV codec was the most common.

For cards that offer true, uncompressed, analog capture, I'm not finding much in the way of consumer grade hardware that offers this.  There are some products from Matrox, but these are firmly in the realm of professional broadcast projects.  At $595 the Blackmagic Decklink Studio 4K is the most affordable card I can find right now that offers true uncompressed analog signal capture.

In regards to using S-video on VHS decks, I could be wrong, but it was my understanding that this wouldn't give you any benefit.  This is because with VHS you are dealing with a single carrier for luminance and a modulated sub-carrier for chrominance.  The color signal consists of harmonics that sit between the harmonics of the baseband luma signal, instead of the two occupying continuous frequency bands alongside each other.  The luma and chroma signals can be separated into separate Y and UV channels via comb filter, to give you an S-video signal, but this is only adding an extra analog processing phase and isn't giving you anything extra when dealing with VHS.  S-VHS and Hi8 video have much greater luma signal bandwidth, and thus benefit from S-video connections, which transport luma and chroma separately.  For regular VHS, however, the signal on the tape itself has combined luma and chroma information, so there is no benefit to separating them before capture.

Many of the earlier and higher end cards did have onboard hardware compression, but later on, TV cards increasingly moved over to software encoding since it made more sense to spend what would have been spent on a hardware encoder on a faster CPU instead. Most of the classic "cx88" cards were of that type.

I decided to do a little research on how VHS works, and it turns out that it doesn't just record the composite signal to tape directly; rather, it separates the Y and C signals because the color subcarrier is at too high a frequency to reliably record on standard tape. So to play back a tape, the signal from the tape has to be split into the bands that contain the Y and C signals, those get decoded, and then they are recombined into a composite signal. S-video skips the recombine step for a little improvement in quality.

https://en.wikipedia.org/wiki/Heterodyne#Analog_videotape_recording

Someone really dedicated toward the task of archiving video from VHS could pick off the signal directly from the head amps and process them using a fairly cheap but powerful DSP board like the Red Pitaya, bypassing the 80s/90s decode electronics. It would be quite the challenge to program, however!
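To give a flavour of the kind of DSP involved, here is a crude sketch of the heterodyne step alone: mixing the VHS "colour-under" chroma back up to the standard NTSC subcarrier. The carrier and sample-rate values are the usual textbook numbers, treat them as assumptions, and the line-by-line phase rotation real decks apply is ignored entirely.

```python
import numpy as np
from scipy import signal

FS      = 4 * 3_579_545.0    # sample rate: 4x the NTSC subcarrier (assumed)
F_UNDER = 629_371.0          # VHS "colour-under" chroma carrier for NTSC
F_SC    = 3_579_545.0        # standard NTSC colour subcarrier

def heterodyne_chroma_up(chroma_under):
    """Mix the low-frequency colour-under chroma back up to the normal
    subcarrier, as a VHS deck's playback electronics do, then low-pass
    away the unwanted image band left over from the mixing."""
    t = np.arange(len(chroma_under)) / FS
    mixed = chroma_under * np.cos(2 * np.pi * (F_SC + F_UNDER) * t)
    b, a = signal.butter(5, (F_SC + 0.6e6) / (FS / 2))
    return 2.0 * signal.filtfilt(b, a, mixed)
```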

There are quite a few applications for DSP in video archival, even in the purely digital realm. Cross correlation could be used to find duplicate content that's not identical files. Some advanced DSP could theoretically figure out whether, of two different files of the same content, one is derived from the other (and thus inferior due to stacking of compression artifacts), or whether they were both encoded from a common master that's no longer available, then possibly try to merge the two to get a better copy in the same way a MIMO radio receiver can get a better signal even if the source is not MIMO.
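A very simple version of that duplicate-finding idea (a sketch only: downscale each frame to a small signature and compare signatures by normalised cross-correlation; the size and any match threshold are arbitrary choices):

```python
import numpy as np
import cv2  # assumes opencv-python for the downscaling

def frame_signature(frame_bgr, size=32):
    """Reduce one frame to a small, zero-mean, unit-norm signature vector."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    sig = cv2.resize(gray, (size, size), interpolation=cv2.INTER_AREA).ravel()
    sig -= sig.mean()
    norm = np.linalg.norm(sig)
    return sig / norm if norm else sig

def correlation(sig_a, sig_b):
    """Normalised cross-correlation; values near 1.0 suggest the same content
    even when the two files are different encodes or resolutions."""
    return float(np.dot(sig_a, sig_b))
```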

On 8/8/2020 at 2:27 AM, TVGuy said:

Hey...

So, I appreciate your work in restoring classic videos, and would like to talk with you about your process.

Hey TVGuy,

Thanks for your detailed response. I'm also happy to see this thread took off!

To everyone asking what my technique in restoring this video was, well, I'm not a professional, just an enthusiast. I don't have a designed method for it or anything like that, and I go by eye to see what I like. Some of the software I used here was StaxRip for encoding, some AviSynth scripts, and the madVR renderer; playback from that was captured using lossless compression in OBS. I may have used Interframe or a de-interlacing frame doubler at one point to try and deal with interlacing artifacts, not too successfully (wobbly lines). I then used After Effects for further upscaling and to add some final touches like color correction and grain. A little bit of grain can go a long way.

As I say, I'm no professional when it comes to video; my forte is film photography -- digital. I think the main thing is to go by what suits your eye, but thanks to TVGuy's advice I'll definitely be doing a lot more theory-based research in the future.

14 hours ago, GoingGreen said:

Many of the earlier and higher end cards did have onboard hardware compression, but later on, TV cards increasingly moved over to software encoding since it made more sense to spend what would have been spent on a hardware encoder on a faster CPU instead. Most of the classic "cx88" cards were of that type.

I must have missed that era of consumer capture cards.  By the time I started working in broadcasting we were using high-end Matrox cards that were uncompressed, but also provided RS-422 deck control.  Looking at consumer capture hardware now, it almost all seems to be USB-based, which I assume must be doing some kind of compression, as it seems like there would be difficulty in transporting an uninterrupted, uncompressed video stream via USB.

14 hours ago, GoingGreen said:

Someone really dedicated toward the task of archiving video from VHS could pick off the signal directly from the head amps and process them using a fairly cheap but powerful DSP board like the Red Pitaya, bypassing the 80s/90s decode electronics. It would be quite the challenge to program, however!

Funny you should mention this.  There was a company, I think out of Indiana, that was producing specialty DSP boards just for this purpose.  In broadcast-style tape decks, all of the electronics are contained on easily swappable cards.  This way it would be easy for a station's in-house engineers to quickly swap out a bad TBC or control board and put the deck right back in service.

A few years ago I became aware of this company that was producing modern DSP boards for old-style Panasonic and Sony broadcast decks.  Not just VHS, but 3/4 inch and BetaCam as well.  The purpose was to get the best possible quality playback for archival purposes.

I have a couple of old Panasonic broadcast-style S-VHS decks, so I tried out one of their DSP boards.  It worked fine, but I honestly couldn't perceive any quality difference when capturing from VHS or S-VHS.  The rest of my workflow was the same, going through the Teranex, and there appeared to be no difference in quality.  Even looking at the signal on my oscilloscope, it looked completely identical.  So, I'm really not sure how much there was to be gained in doing this.

14 hours ago, GoingGreen said:

Cross correlation could be used to find duplicate content that's not identical files. Some advanced DSP could theoretically figure out whether, of two different files of the same content, one is derived from the other (and thus inferior due to stacking of compression artifacts), or whether they were both encoded from a common master that's no longer available, then possibly try to merge the two to get a better copy in the same way a MIMO radio receiver can get a better signal even if the source is not MIMO.

This would be great; I am sure I am storing many terabytes of duplicated content.  Unfortunately, when it comes to the business of video archival, many clients have very strict requirements regarding their data handling.  A lot of it is just paranoia, and there is no real justifiable reason for it.  But typical agreements require their content be handled in such a way that it is completely isolated from any other customer's content.  Every device in the capture process must be air-gapped and access must be limited.  One of the reasons I do so much processing with devices like the Teranex, instead of in software, is to satisfy these data handling requirements that restrict what devices their precious 1990's corporate training videos can touch.  Computers, with built-in memory, could possibly store some latent copy somewhere.

11 hours ago, snapshot said:

To everyone asking what my technique in restoring this video was, well, I'm not a professional, just an enthusiast. I don't have a designed method for it or anything like that, and I go by eye to see what I like. Some of the software I used here was StaxRip for encoding, some AviSynth scripts, and the madVR renderer; playback from that was captured using lossless compression in OBS. I may have used Interframe or a de-interlacing frame doubler at one point to try and deal with interlacing artifacts, not too successfully (wobbly lines). I then used After Effects for further upscaling and to add some final touches like color correction and grain. A little bit of grain can go a long way.

So you are capturing playback from tape or some other kind of analog source then?  What are you using for your capture hardware?

If your capture hardware is compatible, might I suggest using VirtualDub to capture with?  It is an old program now, but open source, so it doesn't cost anything.  The benefit to using it is that it has built-in tools for properly handling the frame rate conversion and de-interlacing, as well as correct color space conversion.  This will let you start with a video that has all the original visual information, motion cadence, and correct colors before you start working on enhancing it.

1 hour ago, Cherylicious said:

 

So is there no LCD that can replicate the properties of a CRT?

 

An LCD itself draws the image in an entirely different way than a CRT.  It is possible to have processing electronics connected to an LCD display that can adapt an NTSC signal; this is how modern televisions sometimes have composite and S-video inputs.  But NTSC, PAL, and SECAM were all designed to take advantage of the way a CRT draws the picture on the screen, with an electron beam steered by powerful magnetic fields scanning a raster pattern.  LCDs address individual pixels in an array.

  • 2 months later...
On 8/6/2020 at 11:46 PM, snapshot said:

 

I hope to work on more classics soon in an attempt to pay homage to the early days of omorashi related content! Let me know what you think, any tips and suggestions regarding process most welcome. Also, any classic clips out there you'd like to see restored?

Snuppa was one of the best. Why'd she stop making videos?

On 11/10/2020 at 6:47 AM, wilbob76 said:

Snuppa was one of the best. Why'd she stop making videos?

Her partner, who went on to make BlueWetting videos with a girl called Bella, said this:

"She doesn't want to do it anymore". 

And that's the last that anyone ever heard of her, in our particular corner of the World. 

I wish her well: but I would guess that she's graduated, got a job, got a family in school... And is hoping that no-one in her life today ever sees her in a 'Snuppa' video. 
