Recently, I'm on a server that chrony just barfs on, so my preferred time mechanism doesn't work. ntpdate doesn't work either; it turns out ntpdate is obsolete (or being obsoleted). After a lot of nonsense, it also turns out that running ntpd from the command line is possible, as long as it is not currently running as a service.
First, let's install NTP.
Install and Configure NTP on CentOS
sudo yum -y install ntp
sudo service ntpd start
sudo chkconfig ntpd on
Stop NTPD and Run Manually on CentOS
sudo service ntpd stop
sudo ntpd -gq
(-g allows the first correction to exceed the normal panic threshold; -q sets the clock once and exits)
sudo service ntpd start
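Once ntpd is back up, it's worth confirming it is actually synchronized. A minimal sketch, assuming the standard ntp tools (ntpq) are on the path; the check_sync helper is mine, not part of the ntp package:

```shell
# check_sync: succeed if the ntpq peer table contains a line
# starting with '*', which marks the peer currently selected
# for synchronization.
check_sync() {
    printf '%s\n' "$1" | grep -q '^\*'
}

# Gather the peer table (empty if ntpd/ntpq are unavailable).
peers=$(ntpq -pn 2>/dev/null || true)

if check_sync "$peers"; then
    echo "clock is synchronized"
else
    echo "clock is NOT synchronized (or ntpd is unreachable)"
fi
```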
Spending time looking into AMI (Amazon Linux), I found it is, as usual with the plethora of Amazon products, sometimes hard to get information about what it actually is. I take this not as a bad thing (though it does take time) but rather a feature that emerges from the *let's develop lots of stuff all at once* school of product management genius.
The clearest explanation of AMI in relation to other distributions I found was on the SaltStack site:
> Salt should work properly with all mainstream derivatives of Red Hat Enterprise Linux, including CentOS, Scientific Linux, Oracle Linux, and Amazon Linux.
This immediately brought to mind that RHEL (and CentOS) are, to my knowledge, never counted together with these other distributions when tallying them up. Counting of course does not matter, but when trying to visualize Linux for the enterprise, it is important to see what choices are being made. Ubuntu has a lot of visibility, especially when it comes to configuring and deploying a VPS for small projects. This visibility tends to obscure the latent reality of CentOS, Oracle, Scientific, and, increasingly important, Amazon Linux.
Looking at Ansible, the Red Hat deployment tool:
> Amazon Linux AMI is mostly compatible with CentOS, but it uses a different version approach, which means that most of those Ansible roles will ignore or complain about not supporting Amazon AMI.
One can also use CentOS on Amazon AWS for a more vanilla approach, though Amazon Linux AMI is tuned especially for EC2.
scp, the secure version of cp, aka copy, is pretty great, since it is straightforward to copy one or more files or directories from any one machine to any other machine (and the command could be running on a third machine).
Note: it is best to use sudo, and full paths for everything.
Second note: when using a private key, declare that first (with -i), and then the local or login+remote file(s).
scp [-r] [-i private_key] (user@server:directory/file or /local/directory/file) (user@server:directory/file or /local/directory/file)
- optional -r for recursive directory structure
- optional -i for an identity file aka private key
- Use the full path for the identity file (this file needs to be local to the machine on which scp is being run)
- either a user@server: or a /local/directory/file for the source
- Note that there is no slash after the colon before the first directory name on a remote location (paths after the colon are relative to the remote user's home directory)
- either a user@server: or a /local/directory/file for the destination
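Putting the pieces above together -- the key path, host, and file paths here are all hypothetical examples:

```shell
# Recursively copy a remote directory down to the local machine,
# authenticating with a private key (note: no slash after the colon):
sudo scp -r -i /home/me/.ssh/mykey.pem \
    ec2-user@203.0.113.10:backups/site \
    /var/backups/site

# Copy a single local file up to the remote user's home directory:
sudo scp -i /home/me/.ssh/mykey.pem \
    /etc/nginx/nginx.conf \
    ec2-user@203.0.113.10:nginx.conf.bak
```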
> Note: as of 2017 I no longer use Webmin or Virtualmin, though I still believe they are much better than cPanel and related products.
This page covers Webmin and Virtualmin (and touches on Usermin and Cloudmin), but will not be updated over time. Basically, Webmin and Virtualmin are similar in functionality to the more widely known WHM and cPanel. My experience is that Webmin and Virtualmin are superior in a variety of ways, but obviously use case will likely dictate what that means for any given individual.
Advantages of Webmin and Virtualmin
Webmin runs its own webserver, miniserv.pl, and while that takes some memory, it is much better when there are big problems with the Apache or Nginx installation (that is, your webserver admin tools aren't down at the same time as your webserver).
Webmin and Virtualmin are way better at managing Apache than the misbegotten so-called EasyApache.
"cPanel & WHM does (sic) not require that you use EasyApache, but it provides a convenient and easy method to modify your web server." #fail
Not everything is perfect in the land of Webmin and Virtualmin. Here are some notes:
To get backups that will not overwrite each other, you have to use a path that includes a date, such as /%d-%m-%Y, and also enable **Do strftime-style time substitutions on file or directory name**.
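The strftime codes in that path work exactly like date(1) format codes; the /backup/virtualmin prefix below is a hypothetical example:

```shell
# The same %d-%m-%Y pattern Virtualmin substitutes into the
# backup path, expanded with date(1) for illustration:
backup_dir="/backup/virtualmin/$(date +%d-%m-%Y)"
echo "$backup_dir"
```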
The Virtualmin backup has nice S3 support, but the Webmin backup doesn't have S3.
Apparently Webmin and Virtualmin depend on Postfix, Mailman, and Dovecot. Perhaps not all three, and perhaps a few others. This is not quite true as a hard dependency, but some error messages do crop up at various points.
However, these don't actually have to be running (they can be disabled and not started at boot). It may be possible to remove the other remnants that depend on the mail applications, and then finally remove them.
> 05-Nov-2016 - Note: I've reached the conclusion that I will entertain no more underpowered devices, as they are ultimately so limited that their return on investment vastly underperforms that of more powerful devices. This means Intel Compute Sticks are no longer acceptable. The Intel NUC is now what I consider to be the future of the desktop.
I've seen the future of the desktop, and it fits in your pocket. Sure, we've all seen this; it's called a mobile phone or mobile device. But the desktop, by its nature and definition, sits on a desk top, and that really means a decent-sized monitor, a keyboard, and a mouse sit there.
As we've seen with all-in-ones, the computer can basically live inside the monitor casing. The advantages are no unsightly cables or large form factor to get in the way, less dust, and a smaller power footprint. With much smaller systems, size is usually counteracted by cost, though netbooks were a nice happy medium (and could be connected to a larger display).
How about something even smaller? Yes, power and capacity are still fairly low, but the cost issue is beginning to vanish, since we are now dealing with almost off-the-shelf processors (Atom, Core M).
Intel Compute Stick
My Intel Compute Stick was purchased at a local retailer. They have good prices, but in Thailand computers are generally a little more expensive, and this one was no exception. Everything would be a little cheaper in the US, but still this seems reasonable, and without the need to order and import via eBay.
The cost for 2 GB RAM / 32 GB SSD, an Atom processor, and Windows 10 Home edition included, was 5,700 THB. While the RAM is not upgradeable and an Atom is a bit slow, the experience was fairly good. Sure, Windows is a bit odd, but at this price, being able to test it out is hard to complain about.
AOC LED IPS Monitor
Add on a 22" AOC LED IPS monitor for 4,200 THB, and a wireless Logitech keyboard/mouse for another 1,290 THB. The total is under 12,000 THB / $350 USD and is available inside Thailand today (as of 01 July 2016). Ridiculously reasonable for a great monitor, an adequate keyboard and mouse, and a supremely small but functional computer.
This is the future of the desktop. All-in-ones are simply not worth the price when best-of-breed components are available for significant savings.
A roadmap of the Intel Compute Stick is interesting.
From what I can tell there are some nice chips coming later in 2016. Pricing is unclear, which will make a big difference, but everything with 4 GB is targeting the desktop, though headless servers are also an interesting idea.
Windows Versions (and Linux)
Some of the older Compute Sticks come with Windows 8.1, either with or without an upgrade to Windows 10. Others come with Windows 10 installed. Check to ensure, as the very same model numbers can have different OS versions. Oh, and yes, Linux can be installed on these. (There is a crappier-spec version of the Intel Compute Stick which has Linux pre-installed, but who wants 1 GB RAM and a 16 GB SSD? OK, for $50 USD you could have a dedicated machine on the home LAN, true enough.)
This is meant to be a note, rather than some hazy -- or clear-eyed -- view. What is interesting to me is the long-awaited Ubuntu 16.04, which promises to have most of the stuff needed for the next few years of tinkering.
It is also interesting to see the development of distributions such as ClearOS (a CentOS-based distribution focused on sysadmin) and ClearLinux (Intel's container-focused distribution meant to take on CoreOS).
As I've been primarily an OS X desktop user since mid-2011, the Ubuntu Unity desktop looks cozy. While that is not the popular opinion among intermediate and advanced users, desktop look-and-feel is really an issue of personalization. The battles I like to fight are more about open source (yes, there are issues with Ubuntu) and performance and control (Apple, you can go fcuk off now).
In any case, it is probably a good idea to switch operating systems every five years or so (that was when I discarded Windows like a cheap suit).
For a netbook use-case, I still think ChromeOS and Chromebooks are a great idea (though try to do anything more than a few web pages and it is very limited). For a tablet, well, I just don't use one: it is meant more for consumption, and I do that better on the laptop and bigger screens, or on-the-go on my iPhone 5 (still the best one-handed mobile device I've encountered).
The Ubuntu approach of one OS on all devices is really a great approach, though the devil, as always, is in the details. In the meantime, in terms of Linux, we are still very much in a two-horse town: CentOS and Ubuntu. Everything else is just detail, or a least-significant bit. This is for the server.
Unfortunately there are some basic divergences, such as how something as central as the Apache configuration files is rearranged under Debian/Ubuntu as of 2.4.x (apache2) vs. RHEL/CentOS (httpd). This has the nasty effect of making what should be OS-agnostic Apache configuration scripts break significantly. See a2enconf and the CentOS Apache directives doc.
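A script that wants to stay OS-agnostic ends up branching on the layout. A minimal sketch, assuming the stock directory locations; the apache_conf_dir helper name is mine:

```shell
# apache_conf_dir: print the main Apache config directory found
# under the root prefix $1 (pass "" for the live system), branching
# on the Debian/Ubuntu vs RHEL/CentOS layouts.
apache_conf_dir() {
    root="${1%/}"
    if [ -d "$root/etc/apache2" ]; then
        # Debian/Ubuntu: apache2.conf, sites-available/, a2enconf
        echo "$root/etc/apache2"
    elif [ -d "$root/etc/httpd" ]; then
        # RHEL/CentOS: conf/httpd.conf plus conf.d/*.conf
        echo "$root/etc/httpd"
    else
        return 1
    fi
}

apache_conf_dir "" || echo "no known Apache layout found"
```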
For the desktop, unfortunately, there are major problems on the Linux side. Of course some of those same problems (and others) plague other operating systems, but that does not make them magically irrelevant.
At some point, looking at the next 5 years, I am likely to go fully Unix on all devices (without having to resort to Android's insecure, buggy, and bloatware-laden OS). Let's count the unices:
- Almond+ router (OpenWRT)
- Kindle Paperwhite (KindleOS), oh and lots of Kindle hacks
- Macbook Air 2011 (Ubuntu 16.04, coming)
- MacMini 2011 (Ubuntu 16.04, coming)
I think there are some more embedded Linux in various devices around the house.
fuser and lsof are two important tools for showing which processes are using which files and/or sockets. fuser is the simpler one, focused on finding out what processes are using a given file (or port), while lsof can list open files without specifying one, such as all open sockets.
fuser stands for *file user* and lists the process IDs of all processes that have one or more files open.
To kill all processes using a given file:
fuser -k filename
To kill all processes using tcp port 80
fuser -k -n tcp 80
Install on RPM-based systems as follows (fuser is part of psmisc):
yum install -y psmisc
And on Debian/Ubuntu (also psmisc):
apt-get --yes install psmisc
Usage: fuser [-fMuvw] [-a|-s] [-4|-6] [-c|-m|-n SPACE] [-k [-i] [-SIGNAL]] NAME...
Show which processes use the named files, sockets, or filesystems.
-a,--all display unused files too
-i,--interactive ask before killing (ignored without -k)
-k,--kill kill processes accessing the named file
-l,--list-signals list available signal names
-m,--mount show all processes using the named filesystems or block device
-M,--ismountpoint fulfill request only if NAME is a mount point
-n,--namespace SPACE search in this name space (file, udp, or tcp)
-s,--silent silent operation
-SIGNAL send this signal instead of SIGKILL
-u,--user display user IDs
-v,--verbose verbose output
-w,--writeonly kill only processes with write access
-V,--version display version information
-4,--ipv4 search IPv4 sockets only
-6,--ipv6 search IPv6 sockets only
- reset options
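Of the options above, the most useful day-to-day form is the verbose lookup (-v), assuming psmisc is installed:

```shell
# Show, with user and access type, every process holding the
# current directory open -- typically at least your own shell:
fuser -v .

# Same idea for a port: who owns tcp/80?
fuser -v -n tcp 80
```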
**lsof** stands for *list open files*. Just as with *fuser*, open files include disk files, named pipes, network sockets, and devices opened by processes. One common use is determining which files are open, by which processes, on a given volume.
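A sketch of the volume question, assuming lsof is installed; /var/log and /home stand in for whatever volume is of interest:

```shell
# Walk the /var/log tree and list every open file beneath it, with
# the owning process (+D descends directories; it is flagged *SLOW?*
# in the usage output for good reason on big trees):
lsof +D /var/log

# If /home is its own mount point, list all processes holding files
# open anywhere on that filesystem:
lsof /home
```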
Install on RPM-based systems as follows:
yum install -y lsof
And on Debian/Ubuntu:
apt-get --yes install lsof
## LSOF Usage
A handy usage is the list of tcp or udp processes, for example:
lsof -i udp
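A few more -i selections that come up constantly (standard lsof syntax; -n and -P just suppress host and port name lookups so output appears faster):

```shell
# Everything touching port 80, any protocol family:
lsof -i :80

# IPv4 TCP only:
lsof -i 4tcp

# Established TCP connections, numeric hosts and ports:
lsof -nP -i tcp -s tcp:ESTABLISHED
```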
The full usage output is a mess:
usage: [-?abhKlnNoOPRtUvVX] [+|-c c] [+|-d s] [+D D] [+|-f[gG]]
[+|-e s] [-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+m [m]] [+|-M]
[-o [o]] [-p s] [+|-r [t]] [-s [p:s]] [-S [t]] [-T [t]] [-u s]
[+|-w] [-x [fl]] [--] [names]
Defaults in parentheses; comma-separated set (s) items; dash-separated ranges.
-?|-h list help
-a AND selections (OR)
-b avoid kernel blocks
-c c cmd c ^c /c/[bix]
+c w COMMAND width (9)
+d s dir s files
-d s select by FD set
+D D dir D tree *SLOW?*
+|-e s exempt s *RISKY*
-i select IPv[46] files
-K list tasKs (threads)
-l list UID numbers
-n no host names
-N select NFS files
-o list file offset
-O no overhead *RISKY*
-P no port names
-R list paRent PID
-s list file size
-t terse listing
-T disable TCP/TPI info
-U select Unix socket
-v list version info
-V verbose search
+|-w Warnings (+)
-X skip TCP and UDP* files
-Z Z context [Z]
-- end option scan
+f|-f +filesystem or -file names
-F [f] select fields; -F? for help
+|-L [l] list (+) suppress (-) link counts
+m [m] use|create mount supplement
+|-M portMap registration (-)
-o o o 0t offset digits (8)
-p s exclude(^)|select PIDs
-S [t] t second stat timeout (15)
-T qs TCP/TPI Q,St (s) info
-g [s] exclude(^)|select and print process group IDs
-i i select by IPv[46] address: [46][proto][@host|addr][:svc_list|port_list]
+|-r [t[m&lt;fmt&gt;]] repeat every t seconds (15); + until no files, - forever.
An optional suffix to t is m&lt;fmt&gt;; m must separate t from &lt;fmt&gt; and
&lt;fmt&gt; is an strftime(3) format for the marker line.
-s p:s exclude(^)|select protocol (p = TCP|UDP) states by name(s).
-u s exclude(^)|select login|UID set s
-x [fl] cross over +d|+D File systems or symbolic Links
names select named files or files on named file systems
Anyone can list all files; /dev warnings disabled;
kernel ID check disabled.