Friday, June 14, 2019

Raspberry PI & Kodi: Point-to-Point Video Live Streaming via FFMPEG

Many of us recall the joy of the first week of grade school -- new fat pencils & erasers, rulers, different papers & the odd smell of 16 new Crayola crayons!  The unspoken lesson of grade school? You were learning to color / draw / create / capture ideas & abstractions on paper.

In the world of the Internet -- and video-capable smartphones -- video is the new "paper". Learning to color / draw / create / capture ideas & abstractions on this new "moving paper" is a necessary & essential skill.

Background: Recently this author has been learning to set up live streaming for tutoring, home schooling, "live scene" broadcast, big screen TV sharing, remote teaching, video conferencing and related uses.  Rather than 10,000 words, here is a diagram of the "Studio" -- the video sources and destinations for a minimal setup:


Figure 1: Basic Studio Setup
(click for larger)

Challenge?  An under-$400 USD budget.  Equipment & goals: (a) Utilize 8+ year old $150 laptops plus low-cost $35 WebCam and $45 Raspberry Pi platforms. (b) Transmit video / audio to a Kodi media center for "big screen" TV sharing with classrooms and assembled groups. (c) Avoid expensive or proprietary solutions with monthly fees. (d) Finally, recast & stream the video to "live" cloud services like Twitch or YouTube for tutoring & teaching anywhere on the Internet.

Part #1: Test video sources for end-to-end network function checks.  Why NOT just use live video sources -- just stream a webcam?  Many small studios use "just a webcam" to test end-to-end -- BUT the webcam ADDS EXTRA COMPLEXITY that can fail.  A fail-resistant video source is required.  If a test video signal can succeed in traversing the network connections, the odds increase that other video streams -- slide shows, rolling video, live webcams, etc. -- can and will work.

History: This author worked at a PBS TV station in the early 1980s. Every TV station & studio has stable, ALWAYS AVAILABLE "video signal generators" (colorbars) and "Station ID" slides (test cards) -- for routine video feed testing / switching -- and often for "fail over": when a "live scene" went sour -- or a live camera broke -- or a scheduled satellite video feed dropped out -- or the pre-recorded program on video tape failed or became "too noisy" for tolerable viewing.

"Home school" TV studios, online classrooms & web conferencing need similar stable sources -- often to provide a stable startup video feed (sign on) -- as new viewers join at random times -- and to uniquely identify the live feed as "the right one." ("Hey, if you see the Sunflower icon with countdown, you have found the kickstart of our live video feed!")
  
Very simple & robust video source & kickstart: Use FFMPEG to send color bars & a 440 Hz ticking tone to a laptop or BigScreen TV.  This will test "in house" network connections (i.e. 'Is our LAN "passing" the live video?').  Referencing Figure-1 above -- and from the command line on the Raspberry Pi B+ (LAN host IP 192.168.1.66) -- running a recent version of Debian -- issue this command:

/usr/bin/ffmpeg -re \
  -f lavfi -i testsrc=size=288x160:rate=10 \
  -f lavfi -i "sine=frequency=440:sample_rate=22050:beep_factor=4" \
  -f mpegts udp://192.168.1.71:1234

To receive the transmitted test video, on the old laptop (LAN host IP 192.168.1.71) -- running some recent version of Linux -- from the command line issue this command:

/usr/bin/ffplay udp://127.0.0.1:1234
# -- or ---
/usr/bin/mpv udp://127.0.0.1:1234

In 5-30 seconds, the color bars and 440 Hz tone should pop up on the laptop screen as detailed in this live test video.  There will be error & caution messages from FFPLAY or MPV -- this is normal -- because the video stream format (mpegts) sends full index frames (keyframes) only intermittently:


Successful local network 'broadcast' test 
of color bars and 440 tone.

Part #2: Sending test signal to media player Kodi & BigScreen TV -- host IP 192.168.1.80 in Figure-1.  Using a plain text editor (Windoze Notepad, VIM, Nano etc) -- create a "STRM" text file named "play-me.strm" with this single line: 

udp://127.0.0.1:1234
Save this file on your Kodi media player in some easy-to-locate folder -- for example "/storage/videos/stream_play" -- and -- from the Raspberry Pi (host 192.168.1.66 in Figure-1) -- again issue the FFMPEG command -- but with the receiving host changed from the old laptop (host 192.168.1.71) to the new target host Kodi (IP 192.168.1.80) -- only the "71" changes to "80" to redirect the video test stream to the new target:

/usr/bin/ffmpeg -re \
  -f lavfi -i testsrc=size=288x160:rate=10 \
  -f lavfi -i "sine=frequency=440:sample_rate=22050:beep_factor=4" \
  -f mpegts udp://192.168.1.80:1234

Receiving the video signal on Kodi / BigScreen TV: Using the graphical interface on Kodi -- navigate to the folder "/storage/videos/stream_play" -- and select the file "play-me.strm".  Kodi will open the STRM file -- and a rotating circle will manifest -- indicating Kodi is attempting to "read" and "sync" on the video stream.  This may require 10-50 seconds to succeed.

Key Caveat: IP host numbers will vary in your studio setup.  The KEY IDEA? The test video signal can be "switched" and re-targeted in your studio -- from sending to old laptop -- to Big Screen TV -- or out to live stream servers in the Internet cloud.  And if the test video signal can be re-targeted -- any slide sequence video or webcam feed can be re-targeted.
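To make that re-targeting habit concrete, here is a tiny, hypothetical BASH helper (host IPs and port are the example values from Figure-1 -- adjust for your own LAN) that builds the UDP target URL for any receiver:

```shell
# Hypothetical helper: build the UDP target URL for any studio receiver,
# so the same FFMPEG test signal can be "switched" between hosts.
stream_url () {
  # $1 = receiving host IP, $2 = optional port (defaults to 1234)
  echo "udp://${1}:${2:-1234}"
}

stream_url 192.168.1.71   # old laptop target
stream_url 192.168.1.80   # Kodi / BigScreen TV target
```

The FFMPEG send command then ends with `-f mpegts "$(stream_url 192.168.1.80)"` -- re-targeting becomes a one-argument change.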

Up Next: How to live stream video slides & audio via FFMPEG.

Thursday, May 10, 2018

Data Lives Forever, Systems Do Not: Remote Sensing & GIS Mapping

In his 2005 book, "Data Crunching":


Author Greg Wilson tossed out this thought -- a useful gold nugget -- and I am paraphrasing from memory:

"[Clients, Customers and] Users care about their data -- and really do not care about processing systems ... Data can live forever ... systems do not ... Just ask Russians doing genealogy on now defunct Soviet era computers."

This is a very important idea to remember when mining & extracting "value" from data.  Here is an "old" example -- old in terms of the Internet's young age.

Background: Nov 1994 web data?  Is it possible to bring digital data back to life -- from 1994 -- and use it for a compare & contrast with current events?  As a result of May 2018 news -- and reminders from past work associates -- regarding the most recent lava fissure eruption along the east rift zone of Kilauea volcano in Hawaii -- some brain triggers sparked.


Lava flow Pix from Wikipedia

In the deep corners of my mind -- I recalled some quick-look, site-analog GIS-mapping -- a fast compare and contrast -- of the Kilauea site against some remote risk sites on the Kamchatka Peninsula of Russia.  The 1994 question was roughly this: Can we use JPL-processed Synthetic Aperture Radar (SAR) -- specifically Space Shuttle Interferometric SAR (InSAR) data -- to remotely explore Kamchatka?


InSAR basic flight-orbit line & Radar
Data collection ground swath

Goal? Extract terrain displacements & subsurface lava movement around the Tolbachik volcano complex.


Google Terrain map detail:
Fissure eruption analog site
south of Tolbachik volcano.
(click for larger)

Motivation? The Tolbachik volcanic complex showed potential for providing risk estimates comparable to the volcanic hazards work at the proposed nuclear waste repository at Yucca Mountain, Nevada, USA.  And survey field work for places like Tolbachik was difficult & expensive.  Maybe "more cost effective" satellite data could come to the rescue?  At least for the preliminary studies?

Now the tough part for today: Did any GIS-mapping data survive from the 1994 archives?

Surprise!  Yes.  In a 1995 TAR archive file -- found on a 1999 ISO 9660 CDROM.  And the CDROM remained 98% readable -- with just a few errors.  The key Kilauea quick-look "test" files were in a strangely named folder "Xmosaic".  Huh? Let us recall -- in 1994 -- the world wide web was very young -- and one of the few viable web browsers was NCSA Mosaic -- and Mosaic would "drop" downloaded files into a folder called "Xmosaic".
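The recovery itself needed nothing fancier than standard *NIX tools -- TAR's on-disk format has been stable for decades. A minimal sketch of the list-then-extract pattern (file and folder names below are hypothetical stand-ins, not the 1995 originals; the CDROM itself would first be attached with something like `mount -o loop disc.iso /mnt`):

```shell
# Stand-in demo: create, list, then extract a TAR archive --
# the same commands read a 1995-era archive today.
mkdir -p demo/recovered
printf 'InSAR quick look\n' > demo/kilauea_notes.txt
tar -C demo -cf demo/archive_1995.tar kilauea_notes.txt

# Always LIST first -- confirms readability & file names:
tar -tf demo/archive_1995.tar

# Extract into a scratch folder, never over live data:
tar -xf demo/archive_1995.tar -C demo/recovered
cat demo/recovered/kilauea_notes.txt
```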


1995 Logo for NCSA Mosaic Web Browser

And to make data recovery more tricky -- my Mosaic version was running and storing data on a long-since-dead desktop: a 50 MHz Sun SparcStation -- running a variation of *NIX called SunOS.

Fortunately the ISO 9660 standard has "survived" much better than the tricky Kodak Photo CD format.

Reuse in 2018: find & extract the key files -- and "upscale" the data value for compare & contrast with today's events:

Fast result?  Raw pix first -- with Oct 1994 UNIX timestamp:


Found Oct 1994 file in GIF format
See original HTML caption below
(slightly updated version at JPL)

2018 Quick ReMapping and GeoRegistered KML ground-overlay image drape: Data value "upscale" of this 1994 image -- into "modern" GIS-mapping tools like Google Earth -- a screen shot preview of the image drape / image overlay setting:


Google Earth Screenshot of Oct 1994 
InSAR displacement via KML ground overlay

Ready-to-Go KML-KMZ file?  For those using Google Earth in your GIS-mapping work, the image overlay KMZ can be fetched from this link.
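For readers rolling their own overlays, a KML ground overlay is a small, human-readable XML file. Here is a hypothetical skeleton written from the shell -- the file name, image name and LatLonBox bounds are illustrative guesses around the JPL caption's 19.5 N / 155.25 W center, not the values in the downloadable KMZ:

```shell
# Write a bare-bones KML GroundOverlay (illustrative values only):
cat > kilauea_overlay.kml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>Oct 1994 InSAR displacement map</name>
    <Icon><href>kilauea_insar_1994.gif</href></Icon>
    <LatLonBox>
      <north>19.75</north>
      <south>19.25</south>
      <east>-155.00</east>
      <west>-155.50</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
EOF

# Sanity check -- the overlay element should open and close:
grep -c 'GroundOverlay' kilauea_overlay.kml
```

Zipping the .kml together with the referenced image yields the portable KMZ form that Google Earth opens directly.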

UPSHOT?  Lesson?  Open standard file formats are our friends.  Unlike the Kodak Photo CD example, these files were stored in bit-byte formats -- and on media -- that had a recovery path far beyond the lifespan of the original hardware and software.  Begin with the end in mind -- please think 20+ years ahead -- to aid data value extraction and future re-combinations.

Ref:
[1] Original HTML caption for Oct 1994 InSAR radar image from JPL website -- and the updated version:

354-5011
PHOTO CAPTION P-44753
October 10, 1994
Kilauea, Hawaii
Change Map 

This is a deformation map of the south flank of Kilauea volcano on the big island of Hawaii, centered at 19.5 degrees north latitude and 155.25 degrees west longitude. The map was created by combining interferometric radar data -- that is data acquired on different passes of the space shuttle which are then overlayed to obtain elevation information -- acquired by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar during its first flight in April 1994 and its second flight in October 1994. The area shown is approximately 40 kilometers by 80 kilometers (25 miles by 50 miles). North is toward the upper left of the image. The colors indicate the displacement of the surface in the direction that the radar instrument was pointed (toward the right of the image) in the six months between images. The analysis of ground movement is preliminary, but appears consistent with the motions detected by the Global Positioning System ground receivers that have been used over the past five years. The south flank of the Kilauea volcano is among the most rapidly deforming terrains on Earth. Several regions show motions over the six-month time period. Most obvious is at the base of Hilina Pali, where 10 centimeters (4 inches) or more of crustal deformation can be seen in a concentrated area near the coastline. On a more localized scale, the currently active Pu’u O’o summit also shows about 10 centimeters (4 inches) of change near the vent area. Finally, there are indications of additional movement along the upper southwest rift zone, just below the Kilauea caldera in the image. Deformation of the south flank is believed to be the result of movements along faults deep beneath the surface of the volcano, as well as injections of magma, or molten rock, into the volcano’s "plumbing" system. Detection of ground motions from space has proven to be a unique capability of imaging radar technology. 
Scientists hope to use deformation data acquired by SIR-C/X-SAR and future imaging radar missions to help in better understanding the processes responsible for volcanic eruptions and earthquakes.


Spaceborne Imaging Radar-C and X-band Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA’s Mission to Planet Earth. The radars illuminate Earth with microwaves, allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L- band (24 cm), C-band (6 cm) and X-band (3 cm). The multi- frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data, complemented by aircraft and ground studies, will give scientists clearer insights into those environmental changes which are caused by nature and those changes which are induced by human activity. SIR-C was developed by NASA’s Jet Propulsion Laboratory. X-SAR was developed by the Dornier and Alenia Spazio companies for the German space agency, Deutsche Agentur fuer Raumfahrtangelegenheiten (DARA), and the Italian space agency, Agenzia Spaziale Italiana (ASI), with the Deutsche Forschungsanstalt fuer Luft und Raumfahrt e.V.(DLR), the major partner in science, operations and data processing of X-SAR. #####





Thursday, April 26, 2018

BigData Analytics: Color Vision in a Cat Whisker World

Seeing with New Digital Eyes

"He who fights with monsters should look [into] ... himself ... if you gaze long into [the] abyss, the abyss also gazes into you." -- Friedrich Nietzsche (1844-1900), from Beyond Good and Evil -- Aphorism 146

Natural Analog Question: For jungle predators, what is the advantage of seeing in color -- when all your prey only see in black & white -- or maybe only have cat whiskers and no eyes?  

Now, thinking in metaphorical terms: What are the computational insight advantages of BigData Analytics -- and social media data curation & Data Mining -- over everyone else? Are you and your electronic doppelganger food for BigData predators?  How can you detect stalkers?  Are there taste-testing teeth marks on your electronic doppelganger?

Ross Anderson of Cambridge University offered up a form of these questions in his 2017 YouTube interview -- entitled "The Threat." Mr Anderson illustrated 30+ years of trends in potentially predatory BigData aggregation and exploitation -- his odd journey through "technical ghettos" -- political gaming of technology -- and vetting the social / psychological / economic landscape of "digital life".

One glaring exploitable aspect of a world gone gaga over "free" social media? Economic & political manipulation of "digitally curated" social media relationships.  Too scary to believe? Hear Ross Anderson's perspectives:


2017: "The Threat" -- 30+ year trends in
computing, privacy, social adaptations (or
not) to BigData Surveillance Capitalism

Deep Nerd Hints: You can download audio only of this interview -- and the audio contains 98% of the key info -- via the command line tool YouTube-DL.  See Nerd Bonus addition at end of this posting.

In 2013 -- long before the Facebook and Cambridge Analytica controversy arose -- Maxine Waters hinted at the iceberg under the digital waters -- for naive sailors of the digital seas:


2013 Maxine Waters perspectives
on Database Nation & politics.

Uncharted Ocean? No. Surprise History and 18th Century BigData: Just before the 1789 French Revolution, the King's secret police were "monitoring" the "news" of Paris.  How?  In a time when most folks were illiterate?  Street vendors would "sing the news" -- to customers and each other.  News traveled by song -- and by word-of-mouth -- and was a mixture of true events and "fake news" (salacious gossip).

The King's secret police would write up a Zeitgeist summary of this street "news" -- apparently for "actionable" intelligence.  Effective?  18th century BigData did not save the French King from the revolution.  And it illustrates that BigData machinations of economics & politics are not new. One amazing character and secret police monitor was Jean-Charles-Pierre Lenoir -- who escaped the worst of the French Revolution -- and left valuable historical insights into the approach and unfolding of the Revolution -- from the "loser side".

Not paranoid? Yet?  Check for teeth marks on your electronic doppelganger.  Some are as subtle as search engine stovepiping of your query results. Many are changes in the online Ads boiling up in web page visits.  The tone and flavor of bulk mail is another indicator. And more regular & routine ID theft challenges. And more ID / authentication friction at the post office, banks, credit-debit card use, good & bad police interactions.  Surveillance capitalism data trafficking -- coupled w/ rogue BigGov actors -- begets more ID theft, which begets more surveillance friction, which begets more ID theft, etc etc.

Upshot?  Action?  Lacking deep resources to thwart BigData with your own BigOther magic?  If you must be on the Internet, then first know this: we cannot stop the data gathering -- only pollute it.  Or at least blow a smoke screen.

Other Options?

(1) Grow some color vision -- learn to detect teeth marks on your electronic doppelganger.
(2) Learn to use some digital camouflage -- and -- metaphorically speaking -- learn the old Star Trek technique of "rotating your shield frequency" -- to throw off the BigData alien tractor beams molesting your electronic doppelganger: create "spare" online profiles.
(3) Use app browsers that thwart BigData tracking -- like Firefox Klar -- or on WinXX machines routinely flush the Google Chrome browser cache (amazing how your search results change!) -- and avoid "logging in."
(4) Boot and cruise the Internet with "stock" live systems.

Welcome to a Brave New World, Mr Savage (;))!

Nerd Bonus & Hints: How to fetch audio only track from YouTube videos:

From your *NIX command line: 

# =============================================
# --- Fetch audio track from YouTube URL
# --- install YouTube-DL if not already onboard
sudo curl -L \
  https://yt-dl.org/downloads/latest/youtube-dl \
  -o /usr/local/bin/youtube-dl 
sudo chmod a+rx /usr/local/bin/youtube-dl 

#
# --- Query YT for stored stream containers,
# --- "shopping list" output results below.
# --- Note selection of container 171 for audio
/usr/local/bin/youtube-dl -F rSw0fDCSfX8

#
# --- Fetch audio only track:
/usr/local/bin/youtube-dl -f 171 rSw0fDCSfX8

[youtube] rSw0fDCSfX8: Downloading webpage
[youtube] rSw0fDCSfX8: Downloading video info webpage
[youtube] rSw0fDCSfX8: Extracting video information
[download] Destination: The Threat by Ross Anderson-rSw0fDCSfX8.webm
[download] 100% of 26.35MiB in 00:36


# ===============================================
# --- Typical output: Query YouTube for shopping
# --- list of video container IDs and digital 
# --- formats -- for help, use "-h" with youtube-dl
/usr/local/bin/youtube-dl -F rSw0fDCSfX8

[youtube] rSw0fDCSfX8: Downloading webpage
[youtube] rSw0fDCSfX8: Downloading video info webpage
[youtube] rSw0fDCSfX8: Extracting video information
[info] Available formats for rSw0fDCSfX8:
format code  extension  resolution note
249          webm       audio only DASH audio   52k , opus @ 50k, 15.80MiB
250          webm       audio only DASH audio   72k , opus @ 70k, 19.07MiB
171          webm       audio only DASH audio   89k , vorbis@128k, 26.35MiB
251          webm       audio only DASH audio  129k , opus @160k, 34.69MiB
140          m4a        audio only DASH audio  130k , m4a_dash container, mp4a.40.2@128k, 39.80MiB
278          webm       248x144    144p  105k , webm container, vp9, 30fps, video only, 29.10MiB
160          mp4        248x144    144p  107k , avc1.4d400c, 30fps, video only, 15.71MiB
242          webm       414x240    240p  242k , vp9, 30fps, video only, 49.02MiB
133          mp4        414x240    240p  258k , avc1.4d400d, 30fps, video only, 45.79MiB
243          webm       622x360    360p  459k , vp9, 30fps, video only, 96.26MiB
134          mp4        622x360    360p  616k , avc1.4d401e, 30fps, video only, 70.79MiB
17           3gp        176x144    small , mp4v.20.3, mp4a.40.2@ 24k
36           3gp        320x186    small , mp4v.20.3, mp4a.40.2
18           mp4        622x360    medium , avc1.42001E, mp4a.40.2@ 96k
43           webm       640x360    medium , vp8.0, vorbis@128k (best)



Wednesday, April 25, 2018

Unlimited Soothing Surf & Stimulating Thinking w/ Brownian Noise

Challenge: Fighting noise with noise?  How to source unlimited soothing sounds to aid heavy thinking -- and block out highway noise?  And avoid privacy-invasive apps / websites -- and run on "old" hardware?

First, a quick solution example -- and second, a detailed background discussion.

On almost any *NIX system -- using SoX -- the following commands will generate "Surf Noise" -- sound that mimics ocean waves breaking & retreating on a beach -- much more pleasing than just un-modulated stream of noise:

# =======================================================
# -- Option #1: Using Sound Exchange (SoX) tools 
# -- Unlimited "real time" play with surf waves (brown noise)  
# -- with period 8 secs (0.125 Hz) and 10% zero offset
/usr/bin/play -n synth brownnoise mix synth sine amod 0.125 10 


# --- Or write 40 secs audio to a WAV file at CD quality of 44100: 
/usr/bin/sox -r 44100 \
             -n surf_out.wav synth 40 brownnoise  \
             mix synth sine amod 0.125 10 

Hear & see "surf_out.wav" -- via YouTube -- video rendering generated via FFMPEG over piano keyboard scale (showcqt filter):  
40 secs of brown noise modulated by 
"ocean waves" with 8 sec period

Alternate tool? A fun visual frequency spectrum -- generated using FFMPEG / FFPLAY -- with the synthetic input "device" anoisesrc -- tremolo-modulated to simulate beach waves:

# =====================================================
# -- Generate Ocean Surf wave noise & spectrum display 
# -- Hints from obscure FFMPEG development notes:
#  http://k.ylo.ph/2015/11/30/endless-seashore.html 
#  
# -- for audio only, set "-showmode 0"


/usr/bin/ffplay -f lavfi -showmode 2 \
  -i 'anoisesrc=d=40:color=brown:r=44100:a=0.5' \
  -af tremolo=f=0.125:d=0.9 

# --- save to file with ffmpeg 

/usr/bin/ffmpeg -y -f lavfi \
       -i 'anoisesrc=d=40:color=brown:r=44100:a=0.5' \
       -af tremolo=f=0.2:d=0.9  \
        surf_out.wav

BACKGROUND:  Why these examples? Why unlimited ocean wave noise?  Sourced for work & study? 

During undergraduate school, I discovered that background music was a soothing help while doing complex math & physics homework -- and it subjectively seemed to aid problem solving. There is evidence that appropriate noise levels help ADHD children -- and according to the Wall St Journal, a little noise helps normal folks.  See also the notes at the end of the script.

Beyond math: for this author, when writing was required, different brain circuits were involved -- and music interfered -- especially if there were lyrics.  The lyrics mangled the word-smithing process.  Use of noisy fans -- or better -- noise from rain / thunderstorms -- provided brain stimulation -- minus the cross-wiring of thoughts from lyrics.

Since childhood days, this author has often used small electric fans or noisy space heaters -- to generate a background of "pink noise" -- and help drown out spurious urban & suburban sounds.  My spouse regularly uses fans to aid sleep. Using noise sources to aid sleep is so common that many commercial solutions exist.


One possible commercial noise
machine (from wikipedia pixs)

Natural Sources: This author enjoys ambient audio recordings of passing thunderstorms -- for later soothing playback during long dry spells -- and to provide low-level brain stimulation for mentally concentrated work -- or sleep assistance.  One of the downsides of recording in suburban environments is persistent low-frequency traffic rumble -- mostly wheel intermod rumble under 30 Hz.  Depending upon atmospheric conditions, this low-freq rumble can persist for many miles from traffic sources -- and can be difficult to filter out while retaining the pleasing deep thunder boom harmonics.

Synthetic Solution?  There are herds of smartphone apps -- interesting websites -- and many YouTube videos -- that provide soothing "white/pink" background noise -- and help drown out traffic / urban noise pollution.  Each of these has its up & down sides.  Enter a low-cost option: Most laptops -- even 6-8 year old laptops -- have excess CPU capacity to generate soothing ocean wave noise.  Why not give new life to an old laptop -- and avoid buying a white noise machine?

UPSHOT? FOSS tools like SoX and FFMPEG / FFPLAY -- as the experiments illustrated here show -- are great for generating soothing background "surf" or ocean wave noise.  To paraphrase an old 1960s TV show: Your mission, Jim -- should you choose to accept it -- is to give these tools a shot -- you will be surprised at their power and flexibility -- beyond "trivial" synthetic surf sounds.  As always, should your computer be caught or killed, my secretary will disavow any knowledge of your actions.  Good luck, Jim!

Wednesday, March 21, 2018

Creating Fun & Quick VFX-SFX w/ FFMPEG: Alien Interference Video

Motivation: Video is the "new" paper in our digital world.  Sometime in the year 2007, more than 50% of the Earth's population owned a cell phone -- and in 2018 some 36% have a "smart camera phone" -- able to play video -- and growing.

This author has had many clients & associates who needed / wanted video to help promote their product or service.  And most recently, a need to demonstrate -- "tease" -- the creation & editing of a humorous short video.

First, some nerd & creative details on making a "feasibility" & "tease" video -- and second, some key management lessons learned from making a "simple" video ("Hey, what's the big deal -- my 5 year old did that on his iPad?" -- Yes, he did.  Being 5 years old is a key factor: At this early age, the mind is free of too many life assumptions).

Project Challenge: Quick & simple generation of video & audio special effects?  A humorous Alien takeover of video signal?  And less ominous and spooky than 1963 Outer Limits intro roll -- "We are controlling transmission." (see 60 sec intro video)

More specific video creation challenge: Create & overlay TV Snow with "only" a Linux-Win32/64 laptop? How does a video creator-editor add TV Noise (snow) to a short & fun promo video clip -- without resorting to some complex Rube Goldberg mix of an A/D converter & digital recording from some noisy analog source?  In other words, how to avoid cabling together a herd of odd & old video boxes to obtain "junk" video for VFX-SFX (special effects)?

Quick Answer:  Use "simple" command line tools like FFMPEG & FFPLAY.  Reverse 40+ years of digital video evolution -- and synthetically generate old TV snow -- to "fuzz up" a perfectly good video clip -- for fun and profit.

Quick Look Result?  Here is an 8 sec simple test & project-budgeting feasibility video clip -- one that proved out the FFMPEG VFX-SFX approach illustrated below.

8 sec Video: Alien noise signal takeover.

Goals & Director Details?  Starting with "the script":
  • Cold open into TV "snow" (video & audio noise), just like the "old days" when the analog TV powered up on a "blank" channel.
  • Cross fade from TV snow to prefix Intro slide (changing the channel or TV station coming "in range")
  • Cross fade via TV snow from intro slide to cartoon Alien (the "episode" segment -- the Alien "take over").
  • Cross fade to trailer Outro slide then fade to black.
Video Editor -- Quick Nerd Details: Using a refurbished "junk" 7+ year old laptop -- running Linux-Knoppix 8.1 -- the following BASH commands made some digital video magic.  Please note that video resolutions are very small to (a) "fail fast, fail forward" (aid experimentation), (b) keep video processing within "reach" of the old laptop -- and (c) pick up some useful visual effects during upscale to 16:9 aspect for "modern" video. Details and steps:

(1) Seed the TV snow transition/cross fade video clip: Generate a very small video & audio from random numerical sources at QQVGA resolution (160x120 pixels, 4:3 aspect):

# -- Generate 5 secs QQVGA res video. Signal? random number source
# -- Note that filter_complex comes in two parts:
# (a) Video gen: "geq=random(1)*255:128:128"
# (b) Audio gen: "aevalsrc=-1.5+random(0)"

ffmpeg -f lavfi \
       -i nullsrc=s=160x120 \
       -filter_complex "geq=random(1)*255:128:128;aevalsrc=-1.5+random(0)" \
       -t 5 out43.mkv

Playing created video "out43.mkv" -- a screen shot preview:
Synthetic Random TV snow
(video noise & audio)
(2) Up-convert & expand the 160x120 video to the "nearest" 16:9 video aspect -- 854x480 pixels (aka FWVGA). This step will expand & "smear out" small pixels into a more analog-realistic set of "blurry" noise pixels. Audio is just copied across. Command line details:

# --- Now resize to FWVGA (smallest 16:9 aspect "std") ---
# --- Note use of "-aspect" to "force" Mplayer & others to use the 
# --- DAR (Display Aspect Ratio) and no rescale to 4:3 or some other


ffmpeg -i out43.mkv -vf scale=854:480 -aspect 854:480 out169.mkv
#
# Note and caution: By default FFMPEG version 3.3.3-4 generates 
# pix_fmt "yuv444p" & this often will NOT play on iPhones,
# some versions of iMovie, QuickTime and Kodi media player 
# (Kodi just blinks & ignores). Correction? Two options:
#  (1) add command line switch "-pix_fmt yuv420p"
#  (2) or make certain the video editing & video rendering 
#      engine outputs yuv420p pixel format (OpenShot!).

Output & Screen shot result?
Noise pixels expanded to 854x480 for 
video overlay & fade to "Alien Takeover"
(3) Generate Intro & Outro slides using ImageMagick and BASH script -- this is VERY FAST and saves the video rendering engine ("melt" in OpenShot) much computational work by stacking many static images into 4-10 secs of video:

#!/bin/bash 
# ==========================================================

#  Revised:  14-Jan-2017 -- Generate Slide from Text String

# See:
#   http://www.imagemagick.org/Usage/text/
#   http://www.tldp.org/LDP/abs/html/loops1.html
#   http://superuser.com/questions/833232/create-video-with-5-images-with-fadein-out-effect-in-ffmpeg
#
#  Hint for embedded newlines (double slash):
#    ./mk_slide_command_arg.sh How Long\\nText Be?
#
# ==========================================================

if [ -n "$1" ]; then
  # echo "First Arg: " $1
  /usr/bin/convert -background black \
    -fill  yellow      \
    -size 854x480     \
    -font /usr/share/fonts/truetype/dejavu/DejaVuSans-BoldOblique.ttf \
    -pointsize 46     \
    -gravity center   \
    label:"$*"        \
slide_w_text.png
else
  echo "Need at least one predicate object token"
  echo "Try: " $0 " A_Slide_String_w_No_Quotes "
  #    ./mk_slide_command_arg.sh How Long\\nText Be?
fi

#

Slides generated appear thus:
Static Slide / Pix for Intro

Static Slide / Pix for Outro

(4) Generate "injected" Alien signal / slide -- the "episode" segment -- using a simple cartoon creation with a digital paint program -- and on 854x480 pixel canvas (16:9 aspect ratio):
"Injected" Alien signal slide
(cartoon) from paint program
(5) Video-edit & merge the intro, episode & outro slides -- as video segments -- using the synthetic TV snow as cross-fade transitions.  In this example -- OpenShot 1.4.x was used.  A screen shot from the edit session -- note the TV snow video is in the lower track -- and "fills" when the upper tracks "fade" to black. This yields the viewer impression that some nefarious "Alien" signal is being injected ("Controlling your TV!"):
ScreenShot: OpenShot 1.4.3 Video
Editor w/ time line loaded segments
and cursor at 4.0 secs
(6) Render / trans-encode the video segments into a common video / audio format -- and container.  In this case libx264 for MP4 video and AAC for audio.

(7) Results?  Scroll back to the 8 sec YouTube clip above.  We started with the end result.  Yes, kinda "simple" and childish -- but please recall the goals:  (a) time & budget estimation for video creation & editing, (b) avoiding wiring together a bunch of old hardware & video signal generators to make a small VFX-SFX effect, and most important (c) a simple "laugh test" to check if the promo-Ad approach was a useful "viewer outreach" path.

Lessons Learned #1:  Surprise!  iPhone, iMovie, QuickTime, Kodi media player -- and others -- can bomb (fail) with certain color pixel formats.  Apparently iPhones are strangely spooky.  Say it ain't so, Steve Jobs.  Corporate software policy or institutionalized bug?  See iOS and QuickTime details.  Also see the significant answer by DrFrogSplat from Jan 2015.
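One hedged workaround, assuming the FFMPEG command-line tools are installed: inspect a suspect clip's pixel format with ffprobe, then re-encode to the widely supported yuv420p.  The file names suspect_clip.mp4 and fixed_clip.mp4 are illustrative placeholders.

```shell
# Print the pixel format of the first video stream
# (e.g. "pix_fmt=yuv444p" on a clip that chokes some players).
ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt \
  -of default=noprint_wrappers=1 suspect_clip.mp4

# Re-encode the video to yuv420p, copying the audio track untouched.
ffmpeg -i suspect_clip.mp4 -c:v libx264 -pix_fmt yuv420p -c:a copy fixed_clip.mp4
```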

Intro slide from 1963 TV
show "Outer Limits"
Lesson Learned #2, from the creative side -- perspectives and deeper history: Sometimes many years of experience & exposure can confound & confuse the creative process -- and fog "simple" workflows.  Ghosts of a past technical life muddle & block the mind -- and reaching back to a child-like mindset is often required.  "How would a 5 year old make this 'tease' video?"
"Master Control" TV
Switcher Room
Back in fall 1980, this author worked as a "weekend program switcher" in "Master Control" for PBS station KNEW TV-3.  Lots of expensive equipment -- $50K and $100K video processing "boxes" everywhere.  The job was a tough Saturday & Sunday gig -- 5 AM to 6 PM -- unlock, power up the studio & switching equipment, carefully energize the 100 kilowatt transmitter, set up multiple VTRs for satellite "dubs" of time-shifted TV programs (i.e. video downloads), feed promos (Ads) & TV programs to the transmitter -- all according to a rigid program schedule -- and then rinse, wash & repeat every 30-60 mins.
1980s broadcast quality video tape:
a 1969 RCA 2-inch quadruplex VTR
And much on- and off-air exposure to Sesame Street, The Electric Company, Masterpiece Theatre, etc.  To add turbulence, broadcast students would drop in to edit project videos.

Legacy Thinking? The fall 1980 weekend gig was great experience -- but it filled my head with lots of legacy hardware / software / video production perspectives -- and 36+ years of technology has marched on -- all that old hardware & its style of video production is long gone.

Or not?  Color Mathematical Models to save bandwidth?

Lesson Learned #3: Turns out the video signal theory & processing practice of analog TV & video are not 100% "gone" -- but live on in creation & editing of "modern" digital video -- even for "simple" jobs -- or for "eye catcher" web publishing-promotion -- or for distribution via YouTube / Vimeo / etc. 
TV Picture: Numerically "perfect"
colorbars for video signal "color burst"
phasing & "hitting" color calibration
points in the Ostwald "color solid"
(see below)
The spirit of analog color -- and human perception -- "lives on" in digital video:  While the technical details are gory and deep -- suffice to say -- playing with FFMPEG was a pleasant surprise -- I had forgotten about details like YUV color space and 4:4:4 vs 4:2:0 pixel encoding.  And it was a shocking surprise that iPhones, iMovie, Apple QuickTime, the Kodi media player and others can -- and do -- "blow up" with FFMPEG's default yuv444p pixel format.

From Wikipedia: HCL color model
(double cone), the foundation
for the YUV color model of old
analog video -- and "buried" in the
pixel formats of digital video.
This "old" but very sophisticated YUV "blending" & "tinting" of color -- over a black & white "backbone signal" -- a legacy of the 1950s TV transition from black & white to color -- is not "dead" -- but alive as an almost corporeal "ghost" in "modern" digital video editing & processing.  Why alive? Human perception of color & movement in cluster clouds of analog or digital pixels has not changed.  The hardware has changed much -- but how humans "see" has not.  And exploiting how humans "see" is still needed to reduce bandwidth and video storage costs -- that demand has not changed in 30+ years.

How does the YUV magic happen?  Pictures are worth 1000s of words -- some from the 1980s -- and some from 1916 color theory -- hinting at the hardware & software techniques used -- techniques that help video renderings "reach into the mind" of the viewer -- "exploiting" their color & movement perceptions -- while also "compressing" the signal / data storage costs.
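The underlying arithmetic is small enough to sketch in a few lines of shell + awk -- here the classic BT.601 RGB-to-YUV conversion for pure red (255, 0, 0).  These are the standard-definition coefficients, shown for illustration only; they match the analog-era math, not any one encoder's exact internals.

```shell
# BT.601 analog YUV from an RGB triple: Y carries brightness (the old
# black & white "backbone" signal); U and V carry the color-difference
# "tint" that 1950s color TV layered on top.
awk 'BEGIN {
  r = 255; g = 0; b = 0                        # pure red, for illustration
  y =  0.299   * r + 0.587   * g + 0.114   * b # luma
  u = -0.14713 * r - 0.28886 * g + 0.436   * b # blue-difference chroma
  v =  0.615   * r - 0.51499 * g - 0.10001 * b # red-difference chroma
  printf "Y=%.0f U=%.0f V=%.0f\n", y, u, v     # -> Y=76 U=-38 V=157
}'
```

Note how little of a pure red pixel lands in Y: that imbalance is exactly what 4:2:0 subsampling exploits, discarding chroma detail the eye barely misses.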

Vectorscope plot of "perfect"
colorbars test signal & scope
beam "hitting" the calibration 
marks for Red, YL, Green, CY, Blue 
and unseen black to white running
up & down the circle center.


From Wikipedia: HSL and HSV "color solid";
the vectorscope "lives" in the equatorial belt
of Ostwald's 1916 color concept & the
vectorscope "hits" specific theoretical
colors in this solid to "stabilize" video
colors.

SUMMARY:  When you are in college -- or attending formal technical school -- a lot of theory is emphasized -- and as a student -- you wonder "when are we gonna get to some practical 'hands on' manipulations?"  The value of abstract theory is often not appreciated until many years later -- when the theory allows "seasoned students" to "see the unseen" -- when "modern" software stumbles  ("Huh? iPhone cannot render YUV444P format pixels? What does that mean?") -- even if masked by 35+ years of technological change.  Viva la abstractions!