Boxee Box “Can’t connect to internet” fix, cloned Boxee services

The Boxee Box was a short-lived but powerful set-top box by D-Link, released in 2010 and discontinued in 2012.

All Boxee Boxes relied on application servers hosted by D-Link at boxee.tv for periodic phone-home calls and service endpoints.

In June 2019 these application servers went down, causing every Boxee Box still in operation to throw “Can’t connect to internet” errors and knocking all user profiles and apps offline.

In August 2019 I released a small Python Flask app, boxee-server-light, to replace the downed boxee.tv servers. The code was created by referencing an existing project by Jimmy Conner (cigamit, boxeed.in forums).
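
To give a sense of how small such a shim can be, here’s a minimal sketch (the /ping route and response are hypothetical placeholders, not the actual boxee-server-light endpoints):

from flask import Flask

app = Flask(__name__)

# Hypothetical placeholder route: answer the box's phone-home check
# with a canned 200 so it stops throwing connection errors.
@app.route('/ping')
def ping():
    return 'OK', 200

if __name__ == '__main__':
    # The box expects the boxee.tv hosts to answer on plain HTTP port 80.
    app.run(host='0.0.0.0', port=80)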

To use it, you’ll need to add DNS entries for all of the Boxee application URLs, pointing them at the boxee-server-light application.

For example:

18.213.38.199  app.boxee.tv
18.213.38.199  api.boxee.tv
18.213.38.199  dir.boxee.tv
18.213.38.199  s3.boxee.tv
18.213.38.199  t.boxee.tv
18.213.38.199  res.boxee.tv
18.213.38.199  0.ping.boxee.tv
18.213.38.199  1.ping.boxee.tv
18.213.38.199  2.ping.boxee.tv
18.213.38.199  3.ping.boxee.tv
18.213.38.199  4.ping.boxee.tv
18.213.38.199  5.ping.boxee.tv
18.213.38.199  6.ping.boxee.tv
18.213.38.199  7.ping.boxee.tv
18.213.38.199  8.ping.boxee.tv
18.213.38.199  9.ping.boxee.tv
18.213.38.199  dl.boxee.tv

… where the IP is the address of the Flask application.
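
If you happen to run dnsmasq, a single wildcard entry covers the whole list above (assuming a dnsmasq-based resolver; swap in your own server’s address):

# dnsmasq.conf: resolve boxee.tv and every subdomain to the Flask app
address=/boxee.tv/18.213.38.199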

For those who are unable to run their own DNS or this application, I am hosting a public version of this code. You can add my public DNS server to your router config, or set it as custom DNS on your Boxee Box in network settings. You can also point directly to my public application server using your own DNS.

My public DNS server is whitelist-only, so please email me if you would like access (I don’t check comments often).

Public DNS server address: 18.211.111.89
Public application server address: 18.213.38.199

For more up-to-date info and discussion, check out my Reddit post:
https://www.reddit.com/r/boxee/comments/ci4ugj/boxee_cloned_server_updates_working_server/


FAQ:

Do I need a static IP address from my ISP to use the public DNS?
Yes. I’ll need to whitelist your IP address. If your ISP hands you a new one every day, this won’t work.

I run my own local DNS. Do I need to be whitelisted to use your application server?
No. Map the boxee domains to my public app server as shown above. No whitelist required.

I’m logged out of my boxee box. How do I log back in while using this app?
Any username and password combo will work to log you back in.

I reset my boxee box. What firmware do I need to be using to use your public servers?
1.5.1 (latest) seems to work best. If you can’t find this firmware, email me.

Do apps work with this project?
I don’t have any apps connected yet. PRs welcome. I’m not 100% sure if app downloading will work without some additional code.



HAProxy dynamic backend selection with Lua script

HAProxy is a popular load balancer with extensive configuration options, including the ability to influence balancing and other behavior via Lua scripts.

In this post I’ll show how to influence HAProxy backend selection via a Lua script. The use case here was the need to choose a backend server based on the responses from each possible backend.

First, Lua needs to be installed, and HAProxy needs to be built with Lua support by setting the USE_LUA=1 environment variable during make.
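
For reference, the build step looks something like this (the TARGET value and Lua paths are assumptions that vary by platform and Lua install):

# Build HAProxy with Lua support; adjust TARGET and the Lua paths for your system.
make TARGET=linux-glibc USE_LUA=1 \
    LUA_INC=/usr/include/lua5.3 \
    LUA_LIB=/usr/lib/x86_64-linux-gnu
sudo make install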

Here’s a stripped-down config example with the relevant lines commented. For brevity, not all required config attributes are included.

global
    # Load the custom Lua script. I usually put this alongside haproxy.cfg.
    lua-load /etc/haproxy/pick_backend.lua

# Frontend config, rtmp traffic.
frontend frontendrtmp
    bind *:1935
    mode tcp

    # inspect-delay was required; without it I was seeing timeouts while
    # the Lua script ran.
    tcp-request inspect-delay 1m

    # This line intercepts the incoming tcp request and runs it through
    # the Lua function registered as "pick_backend".
    tcp-request content lua.pick_backend

    # use_backend based on the "streambackend" variable we inject via the Lua script.
    use_backend %[var(req.streambackend)]


# Example backend. One server per backend. The Lua script will iterate through
# all backends with the "backendrtmp" prefix.
# HAProxy's use_server directive does not yet support Lua, so separate
# backends are necessary.
backend backendrtmp1
    mode tcp
    server rtmp 192.0.2.10:1935 check

Requests to the “frontendrtmp” frontend are routed through the Lua script, which checks each listed backend and chooses one based on its response.

Here’s the Lua script:

local function pick_backend(txn)
    local winner_backend = 'backendrtmp1' -- Fallback; must match an available backend.
    local winner_count = -1 -- Initial count flag.

    for backend_name, v in pairs(core.backends) do
      if backend_name ~= 'MASTER' then -- Filter out the built-in backend name.
        -- Iterate the backend's servers dict, assuming one server per backend.
        for server_name, server in pairs(v.servers) do
          -- Skip any server that is down.
          if server:get_stats()['status'] ~= 'DOWN' then
            local address = string.match(server:get_addr(), '%d+%.%d+%.%d+%.%d+')
            local tcp = core.tcp()
            tcp:settimeout(1)

            -- Connect to the rtmp server to get stats counts.
            if tcp:connect(address, 80) then
              if tcp:send('GET /statistics\r\n') then
                local line, _ = tcp:receive('*a')

                -- Do whatever checks you want here with the response.
                -- In this case, I'll just check the number returned
                -- from the statistics endpoint.
                local streamers = line and tonumber(string.match(line, '(%d+)'))

                -- Check and set the winner, guarding against unparseable responses.
                if streamers then
                  if winner_count == -1 then
                    print('Set initial backend', backend_name)
                    winner_count = streamers
                    winner_backend = backend_name
                  elseif streamers < winner_count then
                    print('New winner', backend_name)
                    winner_count = streamers
                    winner_backend = backend_name
                  end
                end
              end
              tcp:close()
            else
              print('Socket connection failed')
            end
          end
        end
      end
    end
    print('Winner is:', winner_backend)

    -- Set the winning backend name as a variable on the request.
    txn:set_var('req.streambackend', winner_backend)
end

core.register_action('pick_backend', {'tcp-req', 'http-req'}, pick_backend)

The Lua script:

  • Iterates over each configured backend (everything except HAProxy's built-in MASTER entry)
  • Hits an endpoint on the listed server if it's up
  • Checks count from endpoint
  • Compares with count of previous lowest count server
  • Sets a response variable with the name of the backend with the lowest count

This enables us to route traffic dynamically to the server with the lowest number of users.

Regression testing releases with Depicted (Dpxdt), Travis, & Saucelabs

Depicted is a release testing tool that compares before and after screenshots of your webpage, highlighting differences between the two.

Depicted supplements your release testing by allowing you to approve any visual changes a new release may cause.

I wrote a script during my time at Sprintly that would take a TravisCI build ID, pull the related screenshots from our Saucelabs Selenium tests, and upload them to a Depicted API server for comparison.

Before a new release was deployed, we would manually run our Depicted release script and check and approve any visual changes.

This script was integrated as a Django management command for ease of use.
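
The original script isn’t reproduced here, but its overall shape was roughly the following sketch. The Sauce Labs tag and asset name, the Depicted upload endpoint, and the credentials are all placeholders/assumptions, not the original code:

import requests
from django.core.management.base import BaseCommand

# Placeholders; the real script read these from settings.
SAUCE_USER = 'sauce-username'
SAUCE_KEY = 'sauce-access-key'
DPXDT_SERVER = 'https://dpxdt.example.com'

class Command(BaseCommand):
    help = 'Upload Sauce Labs screenshots for a Travis build to Depicted'

    def add_arguments(self, parser):
        parser.add_argument('travis_build_id')

    def handle(self, *args, **options):
        auth = (SAUCE_USER, SAUCE_KEY)

        # Find the Sauce Labs jobs tagged with this Travis build ID
        # (the tagging scheme is an assumption).
        jobs = requests.get(
            'https://saucelabs.com/rest/v1/%s/jobs' % SAUCE_USER,
            params={'tags': 'travis-%s' % options['travis_build_id']},
            auth=auth).json()

        for job in jobs:
            # Pull a screenshot asset from each job (asset name is an assumption).
            image = requests.get(
                'https://saucelabs.com/rest/v1/%s/jobs/%s/assets/final_screenshot.png'
                % (SAUCE_USER, job['id']), auth=auth).content

            # Hand the image to the Depicted API server for comparison
            # (endpoint path is an assumption).
            requests.post('%s/api/upload' % DPXDT_SERVER,
                          files={'file': ('screenshot.png', image)})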

Django, Redis & AWS ElastiCache primary/replica cluster

AWS’s ElastiCache service is a convenient way to launch a Redis cluster. If you’re using Django, both the django-redis-cache and django-redis packages support an ElastiCache Redis instance. If you’re launching ElastiCache Redis with any number of replicas, some additional primary/replica configuration is needed in your Django settings.

As an example, take an ElastiCache Redis cluster with a primary instance (test-001) and two read replicas (test-002 and test-003).

The following are the correct settings for this cluster if you’re using the django-redis-cache backend:

CACHES = {
    'default': {
        'BACKEND': 'redis_cache.RedisCache',
        'LOCATION': [
            "test-001.730tfw.0001.use1.cache.amazonaws.com:6379",
            "test-002.730tfw.0001.use1.cache.amazonaws.com:6379",
            "test-003.730tfw.0001.use1.cache.amazonaws.com:6379"
        ],
        'OPTIONS': {
            'DB': 0,
            'MASTER_CACHE': "test-001.730tfw.0001.use1.cache.amazonaws.com:6379"
        },
    }
}

https://django-redis-cache.readthedocs.io/en/latest/advanced_configuration.html#master-slave-setup
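
If you’re on the django-redis package instead, my understanding is that its default client treats the first LOCATION entry as the primary for writes, so the equivalent settings would look roughly like this:

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        # First entry is the primary (writes); the rest are read replicas.
        'LOCATION': [
            'redis://test-001.730tfw.0001.use1.cache.amazonaws.com:6379/0',
            'redis://test-002.730tfw.0001.use1.cache.amazonaws.com:6379/0',
            'redis://test-003.730tfw.0001.use1.cache.amazonaws.com:6379/0',
        ],
    }
}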

Apache Kafka plaintext authentication and kafka-python configuration reference

Apache Kafka config settings and kafka-python arguments for setting up SASL/PLAIN (plaintext) authentication on Kafka.

You’ll need to follow these instructions for creating the authentication details file and Java options.

I exposed the authenticated listener on port 9095. All other ports were closed via AWS security groups.
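
For reference, the JAAS authentication file those instructions have you create looks something like this (the usernames, passwords, and file path are placeholders):

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_client="client-password";
};

It gets handed to the broker JVM via a Java option, e.g.:

export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"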

Kafka config settings:

security.inter.broker.protocol=PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
advertised.listeners=SASL_PLAINTEXT://example.com:9095,PLAINTEXT://example.com:9092
listeners=SASL_PLAINTEXT://0.0.0.0:9095,PLAINTEXT://0.0.0.0:9092

kafka-python client code for connecting:

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='example.com:9095',
    security_protocol='SASL_PLAINTEXT',
    sasl_mechanism='PLAIN',
    sasl_plain_username='username',
    sasl_plain_password='password',
)

GitHub Desktop & GPG issues “gpg failed to sign the data”

I had some issues while trying to get GPG signing working with GitHub Desktop. While their docs say the application doesn’t support GPG, a bunch of users seem to have it working.

I ran into a few errors before getting it working correctly. The “gpg failed to sign the data” error took the longest to find a fix for.

Assuming you’ve followed all the instructions in GitHub’s docs, also make sure your global git settings point to the gpg binary and that commit signing is set to true:

user.signingkey=EEDDA4EE375C6D12
gpg.program=/usr/local/bin/gpg
commit.gpgsign=true
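
If any of those are missing, you can set them with (swap in your own key ID and gpg path):

git config --global user.signingkey EEDDA4EE375C6D12
git config --global gpg.program /usr/local/bin/gpg
git config --global commit.gpgsign true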

And what ultimately fixed my issue was disabling GPG terminal output via:

echo "no-tty" >> ~/.gnupg/gpg.conf

Firefox & Python Selenium: Stopping auto-update on browser test runs

Found a minor annoyance when running headless Selenium browser tests on Ubuntu Server 16. For some reason, automated tests would start failing when opening Firefox. It turned out the configuration I was running allowed Firefox to auto-update when opened.

To stop Firefox auto updates during your python Selenium test run, load a custom profile:

from selenium import webdriver

# Build a Firefox profile with auto-update disabled.
profile = webdriver.FirefoxProfile()
profile.set_preference('app.update.auto', False)
profile.set_preference('app.update.enabled', False)

driver = webdriver.Firefox(firefox_profile=profile)

If this doesn’t seem to do the trick, verify that apt unattended-upgrades aren’t causing the behavior. In one case, I found the update recorded in the /var/log/unattended-upgrades/unattended-upgrades-dpkg.log log file.

I disabled auto updates via apt globally with the command:

dpkg-reconfigure -plow unattended-upgrades

npm install causing “No space left on device” on Pivotal Cloud Foundry

Had an issue during the npm install buildpack step while deploying to a 2GB Pivotal Cloud Foundry instance. My instance was limited to 2GB of disk space, but somehow my 100MB application was filling up the drive.

Error was along the lines of:

cp: cannot create directory ‘/tmp/contents258369199/...’: No space left on device

Turns out npm’s package cache is on by default and was saving every installed package to a cache directory on disk. So if you’re limited on disk space during a build, turn the cache off with the environment variable:

NODE_MODULES_CACHE=false
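
With the cf CLI that’s one command per app (the app name here is a placeholder), followed by a restage so the buildpack picks it up:

cf set-env my-app NODE_MODULES_CACHE false
cf restage my-app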

Elastic Beanstalk npm install error EACCES: permission denied, mkdir ‘/home/webapp’

Ran into an error while deploying to an AWS Elastic Beanstalk instance set up for Rails + Node + Webpack.

Full error during npm install:

npm install
npm ERR! Linux 4.4.23-31.54.amzn1.x86_64
npm ERR! argv "/opt/elasticbeanstalk/support/node-install/node-v4.6.0-linux-x64/bin/node" "/usr/bin/npm" "install"
npm ERR! node v4.6.0
npm ERR! npm  v2.15.9
npm ERR! path /home/webapp
npm ERR! code EACCES
npm ERR! errno -13
npm ERR! syscall mkdir

npm ERR! Error: EACCES: permission denied, mkdir '/home/webapp'
npm ERR!     at Error (native)
npm ERR!  { [Error: EACCES: permission denied, mkdir '/home/webapp']
npm ERR!   errno: -13,
npm ERR!   code: 'EACCES',
npm ERR!   syscall: 'mkdir',
npm ERR!   path: '/home/webapp',
npm ERR!   parent: 'testing-app' }
npm ERR!
npm ERR! Please try running this command again as root/Administrator.

After some Googling, this Stack Overflow thread gave me some clues: Deploying a Rails / Ember app on AWS Elastic Beanstalk

The /home/webapp directory needed to be created during deploy. This was accomplished using a .ebextensions config file with a commands stanza. Docs on commands are here.

The final commands I used are as follows:

commands:
  01_mkdir_webapp_dir:
    command: mkdir /home/webapp
    ignoreErrors: true
  02_chown_webapp_dir:
    command: chown webapp:webapp /home/webapp
    ignoreErrors: true
  03_chmod_webapp_dir:
    command: chmod 700 /home/webapp
    ignoreErrors: true

Ubuntu Server 12.04 SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

Ran into some trouble with Ubuntu 12.04 and an “SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure” error while accessing the Zoho Docs API through Python. Apparently Zoho’s API endpoint enforces a certain SSL version. The full Python error was:

SSLError: [Errno 1] _ssl.c:504: error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure

After ruling out Python and OpenSSL, the fix was to simply update to Ubuntu 12.04.1 through apt:

apt-get update
apt-get upgrade

And for the hell of it, I did a distribution upgrade:

apt-get dist-upgrade