Discussion:
How do non-rpool ZFS filesystems get mounted?
Chris Siebenmann
2014-03-04 23:03:13 UTC
Permalink
I will ask my question to start with and then explain the background.
As far as I can tell from running truss on the 'zfs mount -a' in
/lib/svc/method/fs-local, this *does not* mount filesystems from pools
other than rpool. However the mounts are absent immediately before it
runs and present immediately afterwards. So: does anyone understand
how this works? I assume 'zfs mount -a' is doing some ZFS action that
activates non-rpool pools and causes them to magically mount their
filesystems?

Thanks in advance if anyone knows this.

Background:
I am having an extremely weird heisenbug problem where on boot[*] our
test OmniOS machine fails out at the ZFS mount stage with errors about:

Reading ZFS config: done.
Mounting ZFS filesystems: cannot mount 'fs3-test-01': mountpoint or dataset is busy
cannot mount '/fs3-test-02': directory is not empty
cannot mount 'fs3-test-02/h/999': mountpoint or dataset is busy
(20/20)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed: exit status 1
[failures go on]

The direct problem here is that as far as I can tell this is incorrect.
If I log in to the console after this failure, the pools and their
filesystems are present. If I hack up /lib/svc/method/fs-local to add
debugging stuff, all of the directories involved are empty (and unmounted)
before 'zfs mount -a' runs and magically present afterwards, even as 'zfs
mount -a' complains and errors out. That was when I started truss'ing
the 'zfs mount -a' itself and discovered that it normally doesn't mount
non-rpool filesystems. In fact, based on a truss trace I have during an
incident it appears that the problem happens exactly when 'zfs mount -a'
thinks that it *does* need to mount such a filesystem but finds that
the target directory already has things in it because the filesystem is
actually mounted already.

Running truss on the 'zfs mount -a' seems to make this happen much less
frequently, especially a relatively verbose truss that is tracing calls
in libzfs as well as system calls. This makes me wonder if there is some
sort of a race involved.
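
(For anyone who wants to reproduce the tracing: a wrapper along these
lines in /lib/svc/method/fs-local, in place of the plain 'zfs mount -a'
invocation, captures both the system calls and the libzfs calls. The
flags are per truss(1) and the output path is arbitrary:

truss -f -d -u libzfs -o /var/tmp/zfs-mount-a.truss \
    /usr/sbin/zfs mount -a

The '-u libzfs' part is what adds the libzfs function calls to the
trace.)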

- cks
[*: the other problem is that the test OmniOS machine has stopped actually
rebooting when I run 'reboot'; it hangs during shutdown and must be
power cycled (and I have the magic fastboot settings turned off).
Neither this nor the mount problem used to happen; both appeared this
morning. No packages have been updated.
]
Mark Harrison
2014-03-04 23:29:42 UTC
Permalink
You mention 'directories' being empty. Does /fs3-test-02 contain empty
directories before being mounted? If so, this will be why zfs thinks
it isn't empty and then fails to mount it. However, the child
filesystems might still mount because their directories are empty,
giving the appearance of everything being mounted OK. I'm not sure why
you're not seeing truss show zfs trying to mount non-rpool
filesystems, but it should be doing so. My wild guess right now is
that zfs checks whether the directory is empty first, and only shows
up as doing something in truss if the dir isn't empty.

We've had this happen before when someone runs mv on a directory that
is actually the root of a filesystem. When zfs remounts it on reboot,
it gets remounted at the old location, which may or may not have other
data in it at that point (this comes up a lot when doing something
like mv foo foo.old; mkdir foo; do_stuff_with foo). I've not tracked
down the exact pathology of this when it happens, but our solution has
basically been to unmount all affected filesystems, run rmdir on all
the blank directories, move any non-blank directories aside (keep them
in case they have data that needs to be kept), and then run zfs mount
-a to let it clean things up.
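
In shell terms that cleanup is roughly the following; the dataset and
directory names here are placeholders, not a prescription:

zfs unmount tank/foo           # unmount the affected filesystem(s)
rmdir /tank/foo                # rmdir only succeeds on genuinely blank dirs
mv /tank/bar /tank/bar.old     # keep non-blank leftovers for inspection
zfs mount -a                   # let zfs recreate mountpoints and remount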
--
Mark Harrison
Lead Site Reliability Engineer
OmniTI
Jim Klimov
2014-03-05 08:56:50 UTC
Permalink
Post by Mark Harrison
You mention 'directories' being empty. Does /fs3-test-02 contain empty
directories before being mounted? If so, this will be why zfs thinks
it's isn't empty and then fail to mount it. However, the child
filesystems might still mount because their directories are empty,
giving the appearance of everything being mounted OK.
Just in case, such cases may be verified with df, which reports the
actual mounted filesystem that provides the tested directory or file:

# df -k /lib/libzfs.so /lib/libc.so /var/log/syslog
Filesystem kbytes used avail capacity Mounted on
rpool/ROOT/sol10u10 30707712 1826637 7105279 21% /
rpool/ROOT/sol10u10/usr
30707712 508738 7105279 7% /usr
rpool/SHARED/var/log 4194304 1491 3638955 1% /var/log


This way you can test, for example, whether a directory is "standalone"
or an actively used mountpoint of a ZFS POSIX dataset.

I think a "zpool list" can help in your debugging to see if the
pools in question are in fact imported before "zfs mount -a",
or if some unexpected magic happens and the "zfs" command does
indeed trigger the imports.
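
A quick way to capture that from inside the method script, right before
the 'zfs mount -a', might be something like:

# zpool list -H -o name,health
# mount -v | grep zfs

i.e. whether the pools are imported at all, versus whether any of
their datasets are already mounted.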
Post by Mark Harrison
As far as I can tell from running truss on the 'zfs mount -a' in
/lib/svc/method/fs-local, this *does not* mount filesystems from pools
other than rpool. However the mounts are absent immediately before it
runs and present immediately afterwards. So: does anyone understand
how this works? I assume 'zfs mount -a' is doing some ZFS action that
activates non-rpool pools and causes them to magically mount their
filesystems?
Regarding the "zfs mount -a" - I am not sure why it errors out
in your case. I can only think of some extended attributes being
in use, or overlay mounts, or stuff like that - though such things
are more likely to come up in "strange" runtime cases, where they
mostly block un-mounts, than in orderly startup scenarios...

Namely, one thing that may be a problem is if the directory in
question is the current working directory of some process, or if a
file there has been created, used and deleted (while remaining open
by some process), which is quite possible for the likes of /var/tmp
paths. But even so, that is likely to block unmounts rather than
over-mounts as long as the directory is (or seems) empty.
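
If that were the case, something like

# fuser -u /fs3-test-02

against the suspect mountpoints (followed by pfiles on any PIDs it
reports) should show who is holding them - the mountpoint name here is
just taken from the errors above.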

Also, as at least a workaround, you can switch the mountpoint
to "legacy" and reference the dataset in /etc/vfstab, including
the "-O" option for an overlay mount. Unfortunately there is no
equivalent dataset attribute at the moment, so it is not a very
convenient solution for possible trees of datasets - but it may
be quite acceptable for leaf datasets where you don't need to
automate any sub-mounts.
Vote for https://www.illumos.org/issues/997 ;)
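
A minimal sketch of that workaround, using one of the dataset names
from the errors above (the vfstab columns are device to mount, device
to fsck, mount point, FS type, fsck pass, mount at boot, and mount
options):

# zfs set mountpoint=legacy fs3-test-02

and then in /etc/vfstab:

fs3-test-02  -  /fs3-test-02  zfs  -  yes  -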

And finally, I also don't know where the pools get imported,
but "zfs mount -a" *should* only mount datasets with canmount=on
and zoned=off (if in the global zone) and a valid mountpoint path,
picked from any pools imported at the moment. The mounts from
different pools may be done in parallel, so if you need some
specific order of mounts (e.g. rpool/export/home and then
datapool/export/home/user... okay, there is in fact no problem
with these - but just to give *some* viable example) you may
have to specify stuff in /etc/vfstab.

I can guess (but would need to grok the code) that something
like "zpool import -N -a" is done in some part of the root
environment preparation to prepare all pools referenced in
/etc/zfs/zpool.cache, perhaps some time after the rpool is
imported and the chosen root dataset is mounted explicitly
to anchor the running kernel.
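
The hand-run equivalent of that guess would be something along the
lines of

# zpool import -c /etc/zfs/zpool.cache -a -N

i.e. import everything listed in the cache file without mounting any
datasets - whether boot actually goes through such a path is exactly
the part I am guessing at.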

As another workaround, you can export the pool which contains
your "problematic" datasets so it is un-cached from zpool.cache
and is not automatically imported nor mounted during the system
bootup - so that the system becomes able to boot successfully
to the point of being accessible over ssh for example. Then you
import and mount that other pool as an SMF service, upon which
your other services can depend to proceed; see here for ideas
and code snippets:

http://wiki.openindiana.org/oi/Advanced+-+ZFS+Pools+as+SMF+services+and+iSCSI+loopback+mounts
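
In its simplest form that is just a one-time

# zpool export otherpool

so that the pool drops out of /etc/zfs/zpool.cache, followed after the
next boot by a manual or SMF-driven

# zpool import otherpool

(the pool name is a placeholder); the wiki page above wraps the import
into a proper SMF service that other services can depend on.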

HTH,
//Jim Klimov
Chris Siebenmann
2014-03-04 23:42:40 UTC
Permalink
| You mention 'directories' being empty. Does /fs3-test-02 contain empty
| directories before being mounted?

It doesn't. All of /fs3-test-01, /fs3-test-02, /h/281, and /h/999
are empty before 'zfs mount -a' runs (I've verified this with ls's
immediately before the 'zfs mount -a' in /lib/svc/method/fs-local).

| I'm not sure why you're not seeing truss show zfs trying to mount
| non-rpool filesystems, but it should be doing so.

My truss traces on successful boot are quite definitive about this.
It clearly looks to see if a lot of fs's are mounted and finds that
they are. I've put one captured trace up here, if people are
interested:

http://www.cs.toronto.edu/~cks/t/fs-local-truss-good-boot.txt

Notice that calls to libzfs:zfs_is_mounted() return either 0 or 1.
Calls that return 0 are followed by a call to libzfs:zfs_mount() (and an
actual mount operation); calls that return 1 aren't. Clearly 'zfs mount
-a' is checking a bunch more filesystems than it actually is mounting.

(I don't know if there's a way to make truss dump the first argument
to libzfs:zfs_is_mounted() as a string so that one can see what mount
points are being checked.)
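
One could also come at it from the other side with a small DTrace
sketch; it won't show the zfs_is_mounted() checks themselves, but it
does show every mount(2) that actually happens and who issues it,
with arg0 and arg1 being the special and the mount point:

dtrace -qn 'syscall::mount:entry {
    printf("%s[%d]: %s on %s\n", execname, pid,
        copyinstr(arg0), copyinstr(arg1)); }'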

A truss from a bad boot is
http://www.cs.toronto.edu/~cks/t/fs-local-truss-bad-boot.txt

This doesn't have the libzfs trace information, just the syscalls, but
you can see a similar sequence of syscall level operations right up
to the point where it does getdents64() on /h/281 and finds it *not*
empty (a 232-byte return value instead of a 48-byte one). Based on
the information from the good trace, this is a safety check inside
libzfs:zfs_mount().

- cks
Ian Kaufman
2014-03-05 18:03:12 UTC
Permalink
Post by Chris Siebenmann
It doesn't. All of /fs3-test-01, /fs3-test-02, /h/281, and /h/999
are empty before 'zfs mount -a' runs (I've verified this with ls's
immediately before the 'zfs mount -a' in /lib/svc/method/fs-local).
As a test, try renaming those "empty" directories and then reboot. We
saw this issue with Solaris 10, where filesystems did not unmount
cleanly at shutdown and then failed to mount at boot.

Ian
--
Ian Kaufman
Research Systems Administrator
UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu
Chris Siebenmann
2014-03-05 15:50:51 UTC
Permalink
| I think a "zpool list" can help in your debugging to see if the pools
| in question are in fact imported before "zfs mount -a", or if some
| unexpected magic happens and the "zfs" command does indeed trigger the
| imports.

Sorry for not mentioning this before: a 'zpool list' before the 'zfs
mount -a' lists the pools as visible, but both df and 'mount -v' do not
report any filesystems from the two additional pools (the ones that get
mount failures and so on).

| The mounts from different pools may be done in parallel, so if you
| need some specific order of mounts (i.e. rpool/export/home and then
| datapool/export/home/user... okay, there is in fact no problem with
| these - but just to give *some* viable example) you may have to
| specify stuff in /etc/vfstab.

As far as I can tell from the Illumos code, this is not the case.
The code certainly seems to be single-threaded and it sorts the mount
list into order in a way that should put prerequisite mounts first
(eg you mount /a and then /a/b).

(This potential issue also doesn't apply to my case because all four
of the mounts from these pools are in the root filesystem, not in any
sub-filesystem.)

| I can guess (but would need to grok the code) that something
| like "zpool import -N -a" is done in some part of the root
| environment preparation to prepare all pools referenced in
| /etc/zfs/zpool.cache, perhaps some time after the rpool is
| imported and the chosen root dataset is mounted explicitly
| to anchor the running kernel.

The last time I spelunked the OpenSolaris code some years ago, the
kernel read zpool.cache very early on but only sort of half-activated
pools then (eg it didn't check to see if all vdevs were present). Pools
were brought to full activation essentially as a side effect of doing
other operations with/to them.

I don't know if this is still the state of affairs in Illumos/OmniOS
today and how such half-activated pools show up during early boot (eg
if they appear in 'zpool list', or even if simply running 'zpool list'
is enough to bring them to fully active status).

- cks
Dan Swartzendruber
2014-03-05 16:29:56 UTC
Permalink
This is all very strange. I saw stuff like this all the time when I was
using ZFS on Linux, due to timing where an HBA would not present devices
quickly enough, resulting in missing pools, missing/unmounted datasets,
etc. It would all get 'fixed' if you manually re-did things, but I've
never seen it in OmniOS.
Bryan Horstmann-Allen
2014-03-05 17:17:27 UTC
Permalink
I've seen that bug on SmartOS. Fixed in the last month or two.
--
bdha
Dan Swartzendruber
2014-03-05 17:29:57 UTC
Permalink
Post by Bryan Horstmann-Allen
I've seen that bug on SmartOS. Fixed in the last month or two.
Any explanation as to what was happening?
Bryan Horstmann-Allen
2014-03-05 17:46:43 UTC
Permalink
+------------------------------------------------------------------------------
| On 2014-03-05 12:29:57, Dan Swartzendruber wrote:
|
| Any explanation as to what was happening?

This is the bug I was hitting: http://smartos.org/bugview/OS-2616

Devices wouldn't be available at boot, but would once the system was up.
--
bdha
cyberpunk is dead. long live cyberpunk.
Dan Swartzendruber
2014-03-05 17:53:02 UTC
Permalink
Post by Bryan Horstmann-Allen
+------------------------------------------------------------------------------
|
| Any explanation as to what was happening?
This is the bug I was hitting: http://smartos.org/bugview/OS-2616
Devices wouldn't be available at boot, but would once the system was up.
Interesting. Thanks for posting this!
Dan McDonald
2014-03-05 17:53:32 UTC
Permalink
Post by Bryan Horstmann-Allen
+------------------------------------------------------------------------------
|
| Any explanation as to what was happening?
This is the bug I was hitting: http://smartos.org/bugview/OS-2616
Devices wouldn't be available at boot, but would once the system was up.
I believe that bugfix is in illumos-gate now as:

https://www.illumos.org/issues/4500

which was fixed by this changeset:

https://github.com/illumos/illumos-gate/commit/da5ab83fc888325fc812733d8a54bc5eab65c65c

and it *should* be in bloody now:

https://github.com/omniti-labs/illumos-omnios/commit/da5ab83fc888325fc812733d8a54bc5eab65c65c

Dan
Chris Siebenmann
2014-03-05 21:23:56 UTC
Permalink
With the aid of DTrace (and Illumos source) I have traced down what is
going on and where the race is. The short version is that the 'zfs mount
-a' in /lib/svc/method/fs-local is racing with syseventd's ZFS module.
I have a dtrace capture (well, several of them) that shows this clearly:

http://www.cs.toronto.edu/~cks/t/fs-local-mounttrace.txt

(produced by http://www.cs.toronto.edu/~cks/t/mounttrace.d which I
started at the top of /lib/svc/method/fs-local.)

Looking at various things suggests that this may be happening partly
because these additional pools are on iSCSI disks and the iSCSI disks
seem to be taking a bit of time to show up (I've never fully understood
how iSCSI disks are probed by Illumos). This may make it spiritually
related to the bug that Bryan Horstmann-Allen mentioned in that both
result in delayed device appearances.

The following is a longer explanation of the race and assumes you
have some familiarity with Illumos ZFS kernel internals.

- pools present in /etc/zfs/zpool.cache are loaded into the kernel
very early in boot, but they are not initialized and activated.
This is done in spa_config_load(), calling spa_add(), which sets
them to spa->spa_state = POOL_STATE_UNINITIALIZED.

- inactive pools are activated through spa_activate(), which is
called (among other times) whenever you open a pool. By a chain
of calls this happens any time you make a ZFS IOCTL that involves
a pool name.
zfsdev_ioctl() -> pool_status_check() -> spa_open() -> etc.

- 'zfs mount -a' of course does ZFS IOCTLs that involve pools
because it wants to get pool configurations to find out what
datasets it might have to mount. As such, it activates all
additional pools present in zpool.cache when it runs (assuming
that their vdev configuration is good, of course).

- when a pool is activated this way in our environment, some sort of
events are delivered to syseventd. I don't know enough about syseventd
to say exactly what sort of event it is and it may well be iSCSI disk
'device appeared' messages. I have a very verbose syseventd debugging
dump but I don't know enough to see anything useful in it.

- when syseventd gets these events, its ZFS module decides that it
too should mount (aka 'activate') all datasets for the newly-active
pools.

At this point a multithreaded syseventd and 'zfs mount -a' are
racing to see who can mount all of the pool datasets, creating two
failure modes for 'zfs mount -a'. The first failure mode is simply
that syseventd wins the race and fully mounts a filesystem before 'zfs
mount -a' looks at it, triggering a safety check of 'directory is not
empty'. The second failure mode is that syseventd and 'zfs mount -a'
both call mount() on the same filesystem at the same time and syseventd
is the one that succeeds. In this case mount() itself will return an
error and 'zfs mount -a' will report:

cannot mount 'fs3-test-02': mountpoint or dataset is busy

- cks
Chris Siebenmann
2014-03-05 21:38:09 UTC
Permalink
It turns out that there is an unpleasant consequence to syseventd being
willing to mount ZFS filesystems for additional pools before the 'zfs
mount -a' has run: you can get unresolvable mount conflicts in some
situations.

Suppose that you have /opt as a separate ZFS filesystem in your
root pool and you also have /opt/bigthing as a ZFS filesystem in
a second pool. You can set this up and everything looks right, but
if you reboot and syseventd beats 'zfs mount -a' for whatever reasons,
you get an explosion:

- we start with no additional filesystems mounted, including /opt
- syseventd grabs the second pool, starts mounting things, and
mounts /opt/bigthing on the *bare* root filesystem, making /opt
(if necessary) in the process.
- 'zfs mount -a' reaches /opt and attempts to mount it. However,
because syseventd has already mounted /opt/bigthing, /opt is not
empty. FAILURE.
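
Concretely, the layout that triggers this is nothing more exotic than
(pool and dataset names hypothetical):

# zfs create -o mountpoint=/opt rpool/opt
# zfs create -o mountpoint=/opt/bigthing datapool/bigthing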

As far as I can tell there is no particularly good cure for this. To
me it really looks like syseventd either should not be started before
fs-local (although I don't know if anything breaks if its startup is
deferred) or should not be mounting ZFS filesystems (although I can
half-see the attraction of it doing so).

- cks
Richard Elling
2014-03-09 02:28:21 UTC
Permalink
Post by Chris Siebenmann
It turns out that there is an unpleasant consequence to syseventd being
willing to mount ZFS filesystems for additional pools before the 'zfs
mount -a' has run: you can get unresolvable mount conflicts in some
situations.
The basic problem affects other file systems, too. The general best practice
has always been to keep your hierarchy flat. But...
Post by Chris Siebenmann
Suppose that you have /opt as a separate ZFS filesystem in your
root pool and you also have /opt/bigthing as a ZFS filesystem in
a second pool. You can set this up and everything looks right, but
if you reboot and syseventd beats 'zfs mount -a' for whatever reasons,
- we start with no additional filesystems mounted, including /opt
- syseventd grabs the second pool, starts mounting things, and
mounts /opt/bigthing on the *bare* root filesystem, making /opt
(if necessary) in the process.
- 'zfs mount -a' reaches /opt and attempts to mount it. However,
because syseventd has already mounted /opt/bigthing, /opt is not
empty. FAILURE.
As far as I can tell there is no particularly good cure for this. To
me it really looks like syseventd should either not be started before
fs-local (although I don't know if anything breaks if its startup is
deferred) or that it should not be mounting ZFS filesystems (although I
can half-see the attraction of it doing so).
... a fix would necessitate building a multi-pool dependency tree. Where
would this live?

How about if we put it in /etc?

This is effectively what vfstab does, though in a more simplistic manner: it
simply sorts the list of file systems and mounts the short path first. The difference
between vfstab and ZFS automatic mounts is that the former can be multi-pool
aware, even if it doesn't know anything about pools at all. Hence the "solution"
is ZFS mountpoint=legacy and use vfstab.
-- richard

--

***@RichardElling.com
+1-760-896-4422
Chris Siebenmann
2014-03-09 02:56:34 UTC
Permalink
| On Mar 5, 2014, at 1:38 PM, Chris Siebenmann <***@cs.toronto.edu> wrote:
| > It turns out that there is an unpleasant consequence to syseventd
| > being willing to mount ZFS filesystems for additional pools before
| > the 'zfs mount -a' has run: you can get unresolvable mount conflicts
| > in some situations.
[...]
Richard Elling:
| ... a fix would necessitate building a multi-pool dependency
| tree. Where would this live?

The thing is that ZFS already has multi-pool dependency handling that
works perfectly well in this situation. 'zfs mount -a' processes all
pools at once and sorts the mount list so that /opt will be mounted
before /opt/bigthing. What makes this not work is that syseventd is
willing to mount filesystems from non-root pools before the rpool
mounts have completed (and also, I believe, to do pool mounts on a
pool-by-pool basis).

At a minimum I believe that syseventd should not be mounting
filesystems from non-rpool pools before all rpool mounts have
completed. I would prefer that syseventd not do mounts at all before
/system/filesystem/local finishes.

(You cannot in general defer syseventd until afterwards because there
are a number of dependencies in SMF today that I assume are there
for good reason. I have actually inventoried these in the process of
relocating syseventd to after fs-local so I can provide a list if people
want.[*])
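
For anyone experimenting with the same relocation: the dependency
itself is just a property group on the sysevent instance, roughly as
below. The property group name is arbitrary, and the usual caveat
about breaking early boot very much applies.

# svccfg -s svc:/system/sysevent:default addpg fs_local dependency
# svccfg -s svc:/system/sysevent:default setprop fs_local/grouping = astring: require_all
# svccfg -s svc:/system/sysevent:default setprop fs_local/restart_on = astring: none
# svccfg -s svc:/system/sysevent:default setprop fs_local/type = astring: service
# svccfg -s svc:/system/sysevent:default setprop fs_local/entities = fmri: svc:/system/filesystem/local
# svcadm refresh svc:/system/sysevent:default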

- cks
[*: This is where I wish SMF had a way to report the full dependency
graph in one go in some format, so you did not have to play
whack-a-mole when doing this sort of thing and also potentially
blow up your system.]
Jim Klimov
2014-03-10 15:13:06 UTC
Permalink
On 2014-03-09 03:28, Richard Elling wrote:
Post by Richard Elling
The basic problem affects other file systems, too. The general best
practice has always been to keep your hierarchy flat. But...
That is a strange best practice, especially given that ZFS allows
and markets the ability of hierarchical datasets. But at least in
this case, this is irrelevant since Chris's setup used datasets
living just under the pool's root. Flatter than that is a private
pool per user, which is not quite the promoted ZFS way ;)
Post by Chris Siebenmann
[*: This is where I wish SMF had a way to report the full dependency
graph in one go in some format, so you did not have to play
whack-a-mole when doing this sort of thing and also potentially
blow up your system.]
This one immediately came to mind:
"SMF Dependency Graph Generator"
https://java.net/projects/scfdot/pages/Home
https://java.net/projects/scfdot/sources/scfdot-src/show

I am not sure how alive or functional this project is today, and on
OmniOS (or any other non-Oracle distro) in particular. But IMHO it
is the best fit to your question (says so on the label ;) ).
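
Short of the full graph, svcs can at least walk it one hop at a time,
e.g.:

# svcs -d svc:/system/sysevent:default     # services sysevent depends on
# svcs -D svc:/system/sysevent:default     # services that depend on sysevent

which is still whack-a-mole, but at least documented whack-a-mole.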

//Jim
Richard Elling
2014-03-10 15:56:45 UTC
Permalink
Post by Jim Klimov
On 2014-03-09 03:28, Richard Elling wrote:
Post by Richard Elling
The basic problem affects other file systems, too. The general best
practice has always been to keep your hierarchy flat. But...
That is a strange best practice, especially given that ZFS allows
and markets the ability of hierarchical datasets.
Hierarchical datasets work well. The problems occur with hierarchical pools.
-- richard