Walt Stoneburner's Ramblings

Curating Chaos

VueJS: Tables, Lists, and Tags

 •  • solved, VueJS, HTML

While working on a VueJS project, I noticed that not all of my elements were appearing.

In fact, you can see the problem by trying this code:

<html>
<head><script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script></head>
<body>

  <div id="app">
    <ul>
      <li>TOP</li>
      <test/>
      <li>BOTTOM</li>
    </ul>
  </div>

  <script>

    const Test = {
      template: '<li>MIDDLE</li>'
    }

    const app = new Vue({
      el: '#app',
      components: { Test },
    });

  </script>
</body>
</html>

You're expecting:

  • TOP
  • MIDDLE
  • BOTTOM

But all you actually get is:

  • TOP
  • MIDDLE

Which doesn't feel right; isn't the static text supposed to be left alone?

Well, the problem is with the <test/> element, as it should really be:

<test></test>

Self-closing components are problematic in DOM templates, because the browser's HTML parser only treats official "void" elements (such as <br> or <img>) as self-closing.

But things can still get weird. For instance, the following code does not work as expected:

<html>
<head><script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script></head>
<body>
  <div id="app">

    <table border="1">
        <tr><td>A</td><td>B</td></tr>
        <test></test>
        <tr><td>Y</td><td>Z</td></tr>
    </table>

  </div>

  <script>
    const Test = {
      template: '<tr><td>ONE</td><td>TWO</td></tr>'
    }

    const app = new Vue({
      el: '#app',
      components: { Test },
    });
  </script>
</body>
</html>

What happens is that the middle row of the table appears before the table itself, both visually and in the DOM.

By comparison, this code works just fine — note it, too, has a container with three elements within it. So what gives?

<html>
<head><script src="https://cdn.jsdelivr.net/npm/vue/dist/vue.js"></script></head>
<body>

  <div id="app">
    <ul>
      <li>TOP</li>
      <test></test>
      <li>BOTTOM</li>
    </ul>
  </div>

  <script>

    const Test = {
      template: '<li>MIDDLE</li>'
    }

    const app = new Vue({
      el: '#app',
      components: { Test },
    });

  </script>
</body>
</html>

The answer has to do with DOM template parsing.

In short, just as HTML forbids some items from having a self-closing tag, it also insists that certain elements can only exist within other elements.

Let's look at the code again:

<table border="1">
    <tr><td>A</td><td>B</td></tr>
    <test></test>
    <tr><td>Y</td><td>Z</td></tr>
</table>

In this case, <test> is not a "legal" child of a <table>, so the HTML parser hoists this "custom element" up out of the place it's not allowed to be, putting it before the table.

The solution is to use the expected element, and then VueJS's is="..." trickery to treat it like a known component. Turning:

<test></test>

Into:

<tr is="test"></tr>

This makes the browser's HTML parser happy, and the element is treated as if it was a VueJS component named test.
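
Applied to the earlier table example, the markup becomes:

<table border="1">
    <tr><td>A</td><td>B</td></tr>
    <tr is="test"></tr>
    <tr><td>Y</td><td>Z</td></tr>
</table>

Now the parser sees a valid <tr> inside the <table>, and Vue swaps it out for the Test component's row.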

You can find more details at this StackOverflow.


UPDATE: In VueJS 3 the v-is directive now evaluates as a JavaScript expression! This means you need to pass it a string, as in <tr v-is="'test'"></tr>. See workarounds.
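
For reference, here's a minimal Vue 3 sketch of the table example. It assumes the global build from a CDN (the exact CDN path is illustrative) and a Vue 3.0 release where v-is is available; later 3.x releases replace v-is with the is="vue:..." syntax.

<div id="app">
  <table border="1">
    <tr><td>A</td><td>B</td></tr>
    <tr v-is="'test'"></tr>
    <tr><td>Y</td><td>Z</td></tr>
  </table>
</div>

<script src="https://cdn.jsdelivr.net/npm/vue@3.0/dist/vue.global.js"></script>
<script>
  const Test = {
    template: '<tr><td>ONE</td><td>TWO</td></tr>'
  }

  Vue.createApp({ components: { Test } }).mount('#app')
</script>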

Synology: Filename Too Long

 •  • Synology, Samba, AFP, SMB, Copy

I do a lot of photography, which includes a lot of cosplay conventions that end up being multi-day all-day continuous shoots. That results in a large amount of data. To store it, I move the final archives off on to large external drives.

Suffice it to say, I've been doing this since before there were affordable cloud storage services, and when internet connections weren't nearly as fast (often requiring a modem). The point being, I've got a lot of old drives.

For much of my workflow, I use Apple machines. And while this has the advantage of having the next, newest, bright and shiny, it also has the disadvantage of Apple removing ports from its systems as they try to deprecate older technologies.

The combination of these two things, lots of old drives and no ports to connect them to modern machines, has left me in the position of trying to transfer the data from an external Maxtor drive using FireWire 400 to a modern Synology DS1817+ via dual-bonded 10Gb Ethernet.

Having had the foresight to retain older Macintosh computers that do have the missing ports, I was able to connect the drive via a series of dongles: the drive over FireWire 400 to FireWire 800 to Thunderbolt into an old MacBook Pro, which then has a second dongle from Thunderbolt to Ethernet to reach the Synology.

It would be great if Apple, when deprecating ports, would also sell an uber desktop dock device with all the deprecated ports on it, connecting with whatever latest and greatest high-speed technology they are offering at the time. I'm always looking for recommendations, and while the Dell USB-C Mobile Adapter (DA300) and CalDigit Thunderbolt Station 3 Plus do great for monitors, audio, media cards, and USB, both are lacking on the historic ports, which would be useful for connecting those fantastic older iSight Apple cameras.

I happened to notice as I was transferring data from the drive to the Synology that I was getting errors. Not hardware errors. Not file system corruption errors. But rather, file copy errors. Mind you, both the source disk and the Synology pass all health checks.

So I decided to copy what I could, logging the results, and deal then with the aftermath. Here's how I did that:

$ cp -rv /Volumes/with/Firewire400 /Volumes/of/mounted/Synology 2>&1 | tee copy-files.log
$ grep "^cp" copy-files.log             # Look for error messages from the copy command

I got two kinds of errors.

One, "Permission denied" errors. If I had used sudo it would have worked, but this was primarily the .Trashes and .Spotlight-V100 directories that I didn't care about.

Two, "File name too long". Now these I did care about. There were only about 150, which given the size of the drive wasn't that bad, but then again these were archived images I didn't want to walk away from.

Admittedly, I use very descriptive names, both for directories and for filenames. But I was fairly certain I hadn't come close.

While additional fact-checking is needed here, and specifics vary from file system to file system, a casual rule of thumb is that modern Unix-based systems have a maximum filename length of 255 characters and allow a maximum path of 4096 characters. I was nowhere close.

And while more fact-checking is needed here, Synology's encrypted volumes are a little wonky. They use up part of the filename length for encryption material (eCryptfs adds a prefix to the encrypted filename). Empirical evidence suggests file names themselves can then be only ~140 bytes. Matters may get worse if Unicode gets involved where a character isn't equal to just one byte. (source)

Admittedly, this can turn into a real problem for folks trying to do a full system backup, as some of the application and operating system paths do get pretty deep.
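
If you want to pre-flight a source tree before copying, something like this sketch flags names that would blow past that limit (the path and the 140-character threshold are placeholders; multi-byte Unicode names would need a stricter threshold, since the real limit is in bytes):

$ find /Volumes/SourceDrive -print | awk -F/ 'length($NF) > 140 { print }'

It splits each path on slashes and prints any entry whose final component is longer than 140 characters.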

In checking my log for errors, I discovered something strange: other files had copied fine with names and paths that, both individually and combined, were longer than the ones that were failing. That meant it was something about the file names or paths themselves, not their length.

So, I copied the directory from the external hard drive to the Mac. No problem.

When I tried copying the folder, now on the Mac, to the Samba mounted Synology, I got this slightly more descriptive error: "You can't copy some of these items because their names are too long or contain certain characters that are invalid on the destination volume." Interesting.

A quick survey of the files showed that they all looked normal. I did a quick check to see if there was Unicode (nope), control characters (nope), bad pathing symbols (nope), etc. Everything was normal.

And then I noticed the path: Con

Back in the old MS-DOS days, the console device was named CON:. This was a "special" filename that MS-DOS reserved for itself.

My directory was named Con because it was from a convention, the locale of the shoots that happened at the main event.

To test my theory, I opened Finder and went to the mounted Synology drive, created a new folder, and tried to rename it to Con. It instantly failed: 'The name "Con" can't be used. Try using a name with fewer characters, or with no special punctuation marks.' Ah ha!

I renamed the folder from Con to Convention, and the copy to the Synology worked just fine. Problem solved, but not the mystery.

Concerned this was a bug worth reporting to Synology, I tried creating a Con directory with my other computer connected to the same Synology drive and folder, and ...it worked. Ok, so what's different?

My first thought was operating systems (e.g. El Capitan vs Mojave), but it's the mounted file system that matters.

I had come to the party late, only recently learning, while investigating stalled file copies to the Synology, that Apple had deprecated its AFP protocol in favor of SMB 3.

On the system doing the file transfer, I was using SMB3 for speed, reliability, and all the good things that come with it.

On the other system where I was observing the transfer, I was using AFP, primarily from a default I hadn't gotten around to changing.

So, I unmounted the drives on the AFP machine, remounted them as SMB, and then tried to create the Con directory. Surprise: the error about not being able to use that filename appeared here as well.

Conclusion

Historic "special" file names from MS-DOS, that are valid on a Mac, and Synology, can't pass through the SMB protocol (even the modern SMB3) without some sort of rejection or mutation, but can through AFP (though its deprecated).

This is dangerous. Other names do strange and terrible things. Note, these don't even have the colon after them.

  • CON is rejected.
  • COM1 to COM4 each mutate into a different filename.
  • NUL mutates into a different filename.
  • AUX mutates into a different filename.
  • LPT1 to LPT3 each mutate into a different filename.
  • PRN mutates into a different filename.

The mutated names follow patterns like CDH4BA~N. I didn't put the real mutations in the post for fear that they actually might leak my encryption key somehow.
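
If you want to audit a source tree for these reserved names before a big copy, a quick sketch (the path is a placeholder):

$ find /Volumes/SourceDrive -print | awk -F/ 'toupper($NF) ~ /^(CON|PRN|AUX|NUL|COM[1-9]|LPT[1-9])$/ { print }'

This prints any file or directory whose base name matches one of the classic MS-DOS device names, regardless of case. (Windows also reserves these names with extensions, like CON.txt, so widen the pattern if that matters to you.)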

While Windows 10 still prohibits those as file names, this is a Linux and Mac setup. And it's the 21st century. If AFP is going away, then how would one use those names? Or, more importantly, why should they matter now? Can't the operating system, and not the protocol, do the complaining?

So what happens if you have one system mounting a shared drive with AFP and another mounting the same shared drive with SMB, and the person with the AFP drive makes a directory with one of the above names? Glad you asked. On the SMB system you see mutated names. This means the directory structure doesn't appear consistent between networking applications, like rsync.

What happens if the SMB system tries to create a directory with one of the names above? Well, to them it mutates, but to the AFP person, it surprisingly looks like the right name.

This is when things get weird. Let's say the SMB machine creates a file called AUX, but they now see a folder AZY9U2~9. The AFP machine sees AUX.

If the AFP machine also creates a folder called AZY9U2~9, it will see that folder and the original AUX. But the SMB machine now sees AZY9U2~9 twice in the same directory listing, which is bound to cause problems, if not for the end user then for software.

Update - Name Mangling

This adventure has led me into a deeper dive into Samba than I ever wanted. Turns out this "feature" is called Name Mangling.

Apparently the smb.conf file has this enabled by default, which I begrudgingly admit makes sense for backwards compatibility with older systems. According to this ServerFault entry, one can disable it with the setting:

[data]
mangled names = no

This feels like one of those cases where you get troubled by peripheral symptoms, but the error messages are inadequate and without knowing what's going on or why, it's difficult to even know what to Google for.

Turns out someone else had this problem over on Ask Different.

Admittedly, I got onto this path because a copy simply failed without explanation. Had I encountered the mangled names first, it would have been a much faster path to discovering that the issue was in Samba, specifically its name mangling.

It seems there are now three options, none good:

  1. Use AFP, although AFP is going away and has been having reliability issues for long-lasting file transfer sessions.
  2. Use SMB for speed and reliability, and accept that it's going to do strange things to my filenames transparently and possibly abort my backups.
  3. Turn off name mangling, but the author of the ServerFault entry says this will create problems when you connect to the server via SMB.

I'm guessing that every device, everywhere, would have to elect not to use name mangling, and it's hard to trust that Apple won't put things back after each OS update or install.

(Elixir) iex: load vs reload

 •  • Elixir, iex

In Elixir's REPL, IEx, if you want to load a file use:

iex> import_file("relative/path/to/file.ex")

This is just as if you had typed the contents of the file into the shell.

If, however, the module is already loaded, you can reload and recompile the module by name:

iex> r SomeModule

This function is meant for development and debugging purposes.
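
For example, with a hypothetical module Greeter defined in lib/greeter.ex:

iex> import_file("lib/greeter.ex")    # evaluate the file, as if typed into the shell
iex> Greeter.hello()
iex> r Greeter                        # after editing lib/greeter.ex, recompile and reload it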

Installing Elixir Syntax Recognition into Atom

 •  • Elixir, Atom

If you're using the Atom text editor and trying to edit Elixir files (.ex), you may notice that there is no syntax detection.

Using the Atom Package Manager (apm) it is possible to install various packages from the command line.

Syntax support is added with:

apm install language-elixir

For more advanced code completion and documentation lookup:

apm install atom-elixir

(Elixir) :observer.start - Failed to load module canberra-gtk-module

 •  • Elixir, Erlang

Elixir has the ability to call Erlang modules, and one of those modules that is handy for debugging is called Observer.

iex(1)> :observer.start

In a graphical environment, this starts up a pretty nice graphical tool, although you might get a warning message:

Gtk-Message: 17:51:14.716: Failed to load module "canberra-gtk-module"

Correcting this error message is as simple as installing the correct modules:

sudo apt install libcanberra-gtk-module libcanberra-gtk3-module

After doing this, starting observer should result in the familiar :ok and a graphical application for looking at the running system.

(Elixir) Could not find Hex

 •  • Elixir, Hex, Installer

When performing a mix deps.get command for an Elixir application, I got this prompt and error message regarding the Hex Package Manager:

Could not find Hex, which is needed to build dependency :dep_from_hexpm
Shall I install Hex? (if running non-interactively, use "mix local.hex --force") [Yn] 

16:43:44.371 [error] Task #PID<0.105.0> started from #PID<0.92.0> terminating
** (UndefinedFunctionError) function :inets.stop/2 is undefined (module :inets is not available)
    :inets.stop(:httpc, :mix)
    (mix) lib/mix/utils.ex:578: Mix.Utils.read_httpc/1
    (mix) lib/mix/utils.ex:526: anonymous fn/2 in Mix.Utils.read_path/2
    (elixir) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
    (elixir) lib/task/supervised.ex:35: Task.Supervised.reply/5
    (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3
Function: #Function<4.48707297/0 in Mix.Utils.read_path/2>
    Args: []
** (EXIT from #PID<0.92.0>) an exception was raised:
    ** (UndefinedFunctionError) function :inets.stop/2 is undefined (module :inets is not available)
        :inets.stop(:httpc, :mix)
        (mix) lib/mix/utils.ex:578: Mix.Utils.read_httpc/1
        (mix) lib/mix/utils.ex:526: anonymous fn/2 in Mix.Utils.read_path/2
        (elixir) lib/task/supervised.ex:90: Task.Supervised.invoke_mfa/2
        (elixir) lib/task/supervised.ex:35: Task.Supervised.reply/5
        (stdlib) proc_lib.erl:249: :proc_lib.init_p_do_apply/3

Performing mix local.hex or mix local.hex --force resulted in the same message that :inets was not available.

This is a sign that while Elixir is installed, Erlang is not.

This can happen if one installs Elixir via a package manager, rather than following all of the official install steps.

In the case of Ubuntu 19.x, sudo apt install esl-erlang did not match a package (esl-erlang comes from the Erlang Solutions repo, which the official install steps have you add); instead, use sudo apt install erlang.

Afterward, mix local.hex will work, as will mix deps.get.
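
Put together, the fix on Ubuntu looks roughly like this:

$ sudo apt install erlang       # Pull in the full Erlang/OTP, including :inets
$ mix local.hex --force         # Install the Hex package manager
$ mix deps.get                  # Dependencies now resolve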

Solved: Linux File Copying to Synology Stalls

 •  • Synology, Linux, Ubuntu, Transfer

You may have encountered the problem yourself: you select a bunch of files, you go to copy them to a remote network drive, things seem to be okay, and then all of a sudden the activity just stalls. Like forever.

A small number of files at a time works fine, but try to do a bulk copy or a backup, and it always aborts.

Even With Hulking Equipment

I ran into the above problem while trying to copy 98GB from an Ubuntu box to a Synology NAS.

Both the Synology and the Linux box use dual-bonded 10Gb Ethernet over short CAT7 runs to a managed smart switch. Everything was using "jumbo frames" with an MTU of 9000 (the switch says 9198, as its accounting works a little differently).

So when the copy stalled and the network reported zero packet loss, it was time to look elsewhere.

Ubuntu's ps command showed that the copy command was stalled with a process state code of 'D+' (uninterruptible sleep, usually due to IO; the plus means the process was in the foreground).
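
If you want to check this yourself, something like the following works (this assumes the copy was done with cp; substitute whatever command is doing the copying):

$ ps -o pid,stat,wchan:30,cmd -C cp     # STAT shows 'D' for uninterruptible (IO) sleep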

At that point I began to think maybe it was the file transfer protocol. I had mounted the Synology drive with afp://.

File Transfer Protocols

Okay, first we'll cover the mistake in my thinking, in case you're under the same impression. Then I'll explain what's really going on.

My Misunderstanding

I had used SMB back when I was primarily using Windows systems. Then I switched to Apple, and in order to get resource forks and other file metadata attributes, started using AFP.

Around that time, security warnings came out saying to stay away from SMB. So I did. Also, CIFS started appearing, and some sloppy online sources said it was basically just SMB. So, I treated it as an alias of SMB.

That leads to why I was connected to my Synology from Ubuntu using AFP.

Now, there is so much wrong with the above understanding, I had to start over at ground zero.

The Many Flavors of SMB

SMB stands for Server Message Block, and the original version is also known as SMB1. This is the protocol the security folks were warning about and suggesting you avoid.

AFP stands for the Apple Filing Protocol, which was designed to handle Apple's metadata: Spotlight, Time Machine, Mac aliases, and Bonjour services. Apparently Apple buried the lede that AFP was deprecated as of OS X Mavericks. The reason being that AFP is 32-bit, while the Apple File System (APFS) uses 64-bit IDs.

NFS stands for Network File System. It was designed for Unix/Linux and has multiple versions (v2, v3, v4.x).

SMB2 stands for Server Message Block 2. (CIFS, the Common Internet File System, is strictly Microsoft's name for a dialect of SMB1, though it is often used loosely for the whole SMB family.) SMB2 can handle the resource forks and metadata of Apple's file systems, even the 64-bit ones, and provides network printing, shared folder authentication, and file locking. (There also appears to be an SMB2 with high MTU option on Synology.)

SMB3 stands for Server Message Block 3.

In broad generalities, SMB3 is the best for performance, followed by AFP. On the slower side there's NFS, and finally SMB1 is the worst.

The Solution

The solution was as simple as going to Synology's control panel, looking up SMB, turning it on, and setting the minimum allowed protocol to SMB2.

Then, on the Linux side, connecting with smb://xxx.xxx.xxx.xxx/ instead of afp://xxx.xxx.xxx.xxx/.

At that point, file operations were both quick and reliable.
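
If you prefer mounting from the command line rather than through the file manager's smb:// URL, a rough equivalent looks like this (the share name, mount point, and username are placeholders):

$ sudo apt install cifs-utils
$ sudo mount -t cifs //xxx.xxx.xxx.xxx/share /mnt/synology -o username=me,vers=3.0

The vers= option pins the SMB protocol version; vers=3.0 asks for SMB3 explicitly.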

Hacknet for OS X

 •  • GoG, games

I purchased Hacknet on GoG, and it failed to start on macOS High Sierra v10.13.6, supposedly due to having System Integrity Protection turned on (which is a good thing).

Turns out, it was possible to play the game after a little ...hacking.

$ cd /Applications/Hacknet/Hacknet.app/Contents/MacOS/  # Go to Game Directory
$ cat Hacknet   # Shows the script to start it
$ chmod u+x Hacknet.bin.osx  # Make it executable

From then on out it's just:

$ cd /Applications/Hacknet/Hacknet.app/Contents/MacOS/
$ export DYLD_LIBRARY_PATH=$DYLD_LIBRARY_PATH:./osx/
$ MONO_DEBUG=explicit-null-checks ./Hacknet.bin.osx 
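
If you'd rather not retype those lines every time, the steps can be wrapped in a small launcher script (the path assumes the default GoG install location shown above):

#!/bin/bash
# Launch Hacknet directly, following the steps above
cd /Applications/Hacknet/Hacknet.app/Contents/MacOS/ || exit 1
export DYLD_LIBRARY_PATH="$DYLD_LIBRARY_PATH:./osx/"
MONO_DEBUG=explicit-null-checks exec ./Hacknet.bin.osx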

Fixing Apple's Smart Keyboard Disconnects on iPad

 •  • solved, apple, ipad, solution, keyboard

I've got one of those external Apple iPad Smart Keyboards. I love everything about it except that there are times when it disconnects and I just can't seem to get the iPad to connect to it.

In fact, when I first got it and the new iPad, even the Apple Genius had serious problems getting it to connect, so much so he went to grab another new keyboard.

For a good while it's been on and off, until I stumbled onto a solution buried in an Apple Discussions forum.

Someone discovered that when they go from Airplane mode to WiFi (like in a Starbucks), the keyboard that was working just fine suddenly stops working. That's what I'm seeing.

He further deduced that the problem surfaced when, just after you connect, Apple throws up a "connection web page" where you sign in. It's at this 'dialog' where things go south.

What he discovered was that if you join the network using the regular on-screen iPad keyboard, and then, once connected, drop everything by going into Airplane Mode and right back out, you've still got your network connection and the Smart Keyboard springs back to life.

Curiously, while you're in Airplane Mode, the keyboard starts working again.

I've just tried this remotely, and it worked exactly as described.

Force Eject a Stuck Drive from the CLI

 •  • solved, OS X, macOS, unmount, diskutil

I recently ran into a problem with macOS High Sierra in which I could not eject a drive because the system thought it was being shared.

I had previously shared it, true, but the client no longer had the device mounted.

smbd was confused and rejecting the unmount.

Even using diskutil to try an unmount didn't work. The OS was trying desperately to protect me from myself.

Eventually I found the solution, which was to use the force keyword in addition to the unmount:

diskutil unmountDisk force /Volumes/Disk_Name
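
Before forcing anything, it can help to confirm the volume name and see what still has files open on it (the volume name here is just the example from above):

$ diskutil list                            # Confirm the disk / volume identifier
$ sudo lsof +D /Volumes/Disk_Name          # See which processes still hold files open there
$ diskutil unmountDisk force /Volumes/Disk_Name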