Below are live mirrors of my “PSU Hacking Club” Ubuntu repos.
https://github.com/psu-hacking/iso-gen
[github_readme repo="psu-hacking/iso-gen"]
https://github.com/psu-hacking/static-site
[github_readme repo="psu-hacking/static-site"]
Also, check out the evolving PSU Hacking Club wiki here!
xD – Jess
GIS Updates:
Newish Raster / DEM image → STL tool in the Shiny-Apps repo:
https://github.com/Jesssullivan/Shiny-Apps
See the (non-load balanced!) live example on the Heroku page:
https://kml-tools.herokuapp.com/
Summarized for a forum member here too: https://www.v1engineering.com/forum/topic/3d-printing-tactile-maps/
CAD / CAM Updates:
Been revamping my CNC thoughts-
Basically, the next move is a complete rebuild (primarily for 6061 aluminum).
I am aiming for:
- Marlin 2.x.x on either a full Rambo or a 32-bit Archim 1.0 (https://ultimachine.com/)
- Dual endstop configuration, CNC only (no hotend support)
- 500mm² work area / swappable spoilboards (~700mm exterior MPCNC conduit length)
- Continuous compressed air chip clearing, shop vac / cyclone chip removal
- Two chamber, full acoustic enclosure (cutting space + air I/O for vac and compressor)
- Full octoprint networking via GPIO relays
FWIW: Sketchup MPCNC:
https://3dwarehouse.sketchup.com/model/72bbe55e-8df7-42a2-9a57-c355debf1447/MPCNC-CNC-Machine-34-EMT
Also TinkerCAD version:
https://www.tinkercad.com/things/fnlgMUy4c3i
Electric Drivetrain Development:
BORGI / Axial Flux stuff:
https://community.occupycars.com/t/borgi-build-instructions/37
Designed some rough coil winders for motor design here:
https://community.occupycars.com/t/arduino-coil-winder/99
Repo: https://github.com/Jesssullivan/Arduino_Coil_Winder
Also, an itty-bitty, skate bearing-scale axial flux / 3-phase motor to hack upon:
https://www.tinkercad.com/things/cTpgpcNqJaB
Cheers-
– Jess
The first ones to arrive in MA, brush up!
Palm warbler
This is usually the first one to arrive. Gold bird, medium-sized warbler, rufous hat. When they arrive in MA they are often found lower than usual / on the ground, looking for anything they can munch on. Song is a rapid trill, more “musical / pleasant” than a fast Chipping Sparrow, faster than many Junco trills.
Pine warbler
Slimmer than palm, no hat, very slim beak, usually has streaks on the breast. Also a triller. They remain higher in the trees on arrival.
Yellow-rumped warbler
Spectacular bird; if it has arrived you can’t miss it. They will also arrive by the dozen, so it is worth waiting for a good visual. These also trill, which is another reason it is good to get a visual. The trill is slow, very “sing-song”, and has a downward inflection at the end. If there are a bunch sticking around for the summer, try to watch some sing; soon enough you will be able to pick out this trill from the others.
— Yellow warbler says “sweet sweet sweet, I’m so Sweet!” and can get a bit confusing with Yellow-rumped warbler
— Chestnut-sided warbler says “very very pleased to meet ya!” and can get a bit confusing with Yellow warbler
Black-and-white warbler
Looks like a zebra – always acts like a nuthatch (clings to trunk and branches). This one trills like a rusty wheel. It can easily be distinguished after a bit of birding with some around.
American Redstart
Adult males look like a late-’50s hot-rodded American muscle car: long, low, two-tone paint job. Matte/luster black with flame accents. Can’t miss it. The females and young males are buff (chrome, to keep in style I guess) with yellow accents. Look for behavior: if a “female” is getting beaten up while trying to sing a song in the same area, it is actually a first-year male failing to establish a territory due to obviously being a youth.
Cheers,
– Jess
Joplin for all your Operating Systems and devices
As a lifelong IOS + OSX user (Apple products), I have used many, many notes apps over the years. From big name apps like OmniFocus, Things 3, Notes+, to all the usual suspects like Trello, Notability, Notemaster, RTM, and others, I always eventually migrate back to Apple notes, simply because it is always available and always up to date. There are zero “features” besides this convenience, which is why I am perpetually willing to give a new app a spin.
Joplin is free, open source, and works on OSX, Windows, and Linux, plus IOS and Android phones.
Find it here:
brew install joplin
The most important thing this project has nailed is cloud support and syncing. I have my iPhone and computers syncing via Dropbox, which is easy to set up and works… really well. The Joplin folks have added many cloud options, so this is unlikely to be a sticking point for users.
Here are some of the key features:
- Markdown is totally supported for straightforward and easy formatting
- External editor support for emacs / atom / etc folks
- Extra bit to troubleshoot possible issue for hardcore emacs users:
- https://discourse.joplin.cozic.net/t/note-for-emacs-users/623
- Layout is clean, uncluttered, and just makes sense
- Built-in markdown text editor and viewer is great
- Notebook, todo, note, and tags work great across platforms
- Browser integration, E2EE security, file attachments, and geolocation included
Hopefully this will be helpful.
Cheers,
– Jess
My computer recently crashed very, very hard while I was removing a small, empty alternative-OS partition I no longer needed. This is a fairly mundane operation that I do now and again, part of an ongoing fight to keep at least a few gigs of space free for actual work on a precious 250GB Mac SSD.
The crash results? Toasted GPT tables all around. My 2015 computer’s next move was to reboot, only to find essentially no partitions… at all. What it did show was (wait for it) Clover bootloader of all things, with a single Windows Boot Camp icon (nothing in there either). That is so wrong…. On all levels!
I brought the machine to the local university repair. They declared this machine bricked and offered to wipe it. Back to me it came…
I scheduled an Apple support session with a phone rep, which after around 45 minutes of actually productive troubleshooting ideas (none helping though) was forwarded to a senior supervisor. She was interested in this problem, and we scheduled a larger block of time. But, in the meantime, I still wanted to try again….
How to recover a garbled GPT table for Mac OSX:
Start with clean SMC and PRAM / NVRAM.
Clearing these actually made accessing internet recovery (how we get to a stand-in OS with a terminal) dozens of times faster: from 2.5 hours down to 7 minutes. I actually waited 2.5 hours twice on separate attempts before I cleared these.
Follow these Apple links to perform these operations:
https://support.apple.com/en-us/HT204063
https://support.apple.com/en-us/HT201295
Have a text editor open somewhere so you can write down values as you go.
Restart the computer into internet recovery. Command + R or Command + Shift + R.
Wait.
Open a Terminal. The graphical Disk Utility is useless here because the disk / partition we want is unreachable (so it will say everything is great).
Run:
diskutil list
For me, I see disk0s2 is 180.6 GB. That’s my stuff!
I also found /dev/disk2 → /dev/disk14 to be tiny partitions- don’t worry about those.
The syntax you are looking for is:
Name: “untitled” Identifier: disk#
(NOT disk#s#)
Write down ALL of the above information for the disk you are after. That is probably disk0.
Then:
gpt -r show disk0
Copy down the readout in your terminal for all entries bigger than “32”. The critical fields here are Start, Size, Index, and Contents. Each field is supremely important.
Here is mine (formatted for web):
# Disk0, with contents > “32” :
# First Table:
Start: 40
Size: 409600
Index: 1
Contents: C12A7328-F81F-11D2-BA4B-00A0C93EC93B
# Second table, the one with my data:
Start: 409640
Size: 352637568
Index: 2
Contents: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF
Note, this is the initial Contents. I rewrote this once with the correct Apple Index 2 data but did not create a new table (leaving the rest of the broken bits broken). We are replacing / destroying a table here, but not the data.
Actions:
# unmount the disk. From here we are doing tables, not disks / data.
diskutil unmountDisk disk0
# Get rid of the GPT on the disk we are recovering. We are not touching the data.
gpt destroy disk0
# Make a new one to start with some fresh values.
gpt create -f disk0
# perform magic trick
# USE THE DATA YOU WROTE DOWN FROM “gpt -r show disk0”. THIS IS IMPORTANT.
# we must add that first small partition at index 1. Verbatim.
gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0
# index two (for me) is my data. We are going to use the default OSX / Mac HD partition values.
# the Length of “372637568” is not as sure fire as the GPT Contents.
# YMMV, but YOLO.
gpt add -i 2 -b 409640 -s 372637568 -t 7C3457EF-0000-11AA-AA11-00306543ECAC disk0
Again, that Contents value is 7C3457EF-0000-11AA-AA11-00306543ECAC.
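To reduce typo risk when re-entering these values, the `gpt add` invocations can be generated from whatever you wrote down. This is only a sketch: the entries below are the values from my `gpt -r show` readout, and yours will differ. Substitute your own recorded Start, Size, Index, and Contents fields.

```python
# Build `gpt add` commands from the table entries recorded earlier.
# Each entry is (index, start block, size, partition type GUID).
# These are the values from my disk; use the ones YOU wrote down.
entries = [
    (1, 40, 409600, "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"),          # EFI system partition
    (2, 409640, 352637568, "7C3457EF-0000-11AA-AA11-00306543ECAC"),   # the data partition
]

def gpt_add_commands(disk, entries):
    """Format one `gpt add` invocation per recorded table entry."""
    return [
        "gpt add -i {} -b {} -s {} -t {} {}".format(i, b, s, t, disk)
        for (i, b, s, t) in entries
    ]

for cmd in gpt_add_commands("disk0", entries):
    print(cmd)
```

Paste the printed commands into the recovery terminal one at a time, in index order.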
– Jess
written in the recovered computer xD
Below is the readme mirror from my Github repo. Scroll down for my Python3 evaluation script.
….Or visit the page directly: https://github.com/Jesssullivan/ChapelTests
[github_readme repo="Jesssullivan/ChapelTests"]
Now Some Python3 Evaluation:
# Adjacent to the compiled FileCheck.chpl binary:
python3 Timer_FileCheck.py
Timer_FileCheck.py will loop FileCheck and find the average time it takes to complete, with a variety of additional arguments to toggle parallel and serial operation. The iterations are:
ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
- Default: full parallel
- Serial_SE: serial evaluation (--SE) but parallel domain creation
- Serial_SP: serial domain creation (--SP) but parallel evaluation
- Serial_SE_SP: full serial (--SE --SP)
Output is saved as Time_FileCheck_Results.txt
- Output is also logged after each of the (default 10) loops.
The idea is to evaluate a “--flag” (in this case, serial or parallel operation in FileCheck.chpl) to see if there are time benefits to parallel processing. In this case, there really are not any, because that program relies mostly on disk speed.
Evaluation Test:
# Time_FileCheck.py
#
# A WIP by Jess Sullivan
#
# evaluate average run speed of both serial and parallel versions
# of FileCheck.chpl -- NOTE: coforall is used in both BY DEFAULT.
# This is to bypass the slow findfiles() method by dividing file searches
# by number of directories.

import subprocess
import time

File = "./FileCheck"  # compiled chapel binary to run

# default false, use for evaluation
SE = "--SE=true"
# default false, use for evaluation
SP = "--SP=true"  # no coforall looping anywhere
# default true, make it false:
R = "--R=false"  # do not let chapel compile a report per run
# default true, make it false:
T = "--T=false"  # no internal chapel timers
# default true, make it false:
V = "--V=false"  # no verbose logging
# default is false
bug = "--debug=false"

Default = (File, R, T, V, bug)  # default parallel operation
Serial_SE = (File, R, T, V, bug, SE)
Serial_SP = (File, R, T, V, bug, SP)
Serial_SE_SP = (File, R, T, V, bug, SP, SE)

ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]

loopNum = 10  # iterations of each runTime for an average speed

# set up output file
file = open("Time_FileCheck_Results.txt", "w")
file.write('eval ' + str(loopNum) + ' loops for ' + str(len(ListOptions)) + ' FileCheck Options' + "\n")

def iterateWithArgs(loops, args, runTime):
    for l in range(loops):
        start = time.time()
        subprocess.run(args)
        end = time.time()
        runTime.append(end - start)

for option in ListOptions:
    runTime = []
    iterateWithArgs(loopNum, option, runTime)
    file.write("average runTime for FileCheck with " + str(option) + " options is " + "\n")
    file.write(str(sum(runTime) / loopNum) + "\n")
    print("average runTime for FileCheck with " + str(option) + " options is")
    print(str(sum(runTime) / loopNum))

file.close()
Dual OS on a 2015 MacBook pro
As the costs of Apple computers continue to skyrocket and the price of useable amounts of storage zooms past a neighboring galaxy (for a college student, at least), I am always on the hunt for cost effective solutions to house and process big projects and large data.
Pop OS (a neatly wrapped Ubuntu) is the in-house OS from System76. After looking through their catalog of incredible computers and servers, I thought it would be a good time to see how far I can go with an Ubuntu daily driver. Of course, there are many major, do-not-pass-go downsides; see the list below:
- Logic Pro X → There is no replacement 🙁 A killer DAW with fantastic AU libraries. I am versed with Reaper and Bitwig, but neither is as complete as Logic Pro. I will be evaluating Pop with an installation of Reaper, but with so few plugins (I own very few third-party sets) this is not a fair replacement.
- Adobe PS and LR: I do not like Adobe, but these programs are… kind of crucial for most projects of mine that involve 2D raster graphics. I continue to use Inkscape for many tasks, but it is irrelevant when it comes to pixel-based work and photo management / bulk operations.
- AutoCAD / Fusion 360 / Sketchup: I like FreeCAD a lot, but it is not at all like the other programs. Not worse or better, but these are all very different animals for different uses.
- Apple Notes and other Apple-y things: OSX is extremely refined. Inter-device solutions are superb. I have gotten used to Google Keep, but it is not quite at the in-house Apple level.
- XCode and IOS Simulator environments: I do use Expo, but frankly to make products for Apple you need a Mac.
Dual Boot (OSX and Pop Ubuntu) Installation on a 2015 MBP:
This process is quite simple, and only calls for a small handful of post-installation tweaks. My intent is to create a small sandbox with minimal use of “extras” (no extra boot managers or anything like that).
Steps:
Partition separate “boot”, “home”, and other drives
- I am using a 256GB micro SD partitioned in half for OSX and Pop_OS (SanDisk Extreme, “v3” speed-rating card, via a BaseQi slot adapter)
Use the partition tool in Mac Disk Utility. Be sure to set these new partitions as FAT 32; we will be using ext4 and other more linux-y filesystems upon installation, so these need to be as generic as possible.
Get a copy of Pop_OS from System76.
Use Etcher (recommended) or any other image burning tool to create a boot key for Pop.
The USB key only has one small job: Pop_OS will be installed from it into a better location, the boot partition made in the previous step. If you are coming from a hackintosh experience, fear not: everything will stay in the MacBook Pro. No extra USB safety dongles, Kexts, or Plist mods…!
BOOT INTO POP_OS:
Restart your computer and hold down the alt/option key. THIS IS HOW TO SWITCH between Pop_OS, OSX, Boot Camp, and anything else you have in there. You should see an “efi” option next to the default OSX. (Note: at least in my case, the built-in bootloader defaults to the last used OS at each restart.)
Once you are in the Pop_OS installer, click through and select the appropriate partitions when prompted. After this installation, you may remove the USB key and continue to select “efi” in the bootloader.
ASSUMING ALL GOES WELL:
You are now in Pop_OS! Using the alt/option key will become second nature… but some Pop key mappings may not. Continue for a list of MacBook Pro-specific tweaks and notes.
First moves:
Go to the Pop Shop and get the “Tweaks” tool. I made one or two small keymap changes, but this is likely personal preference.
Default, important Key Mappings:
Command will act as a “control center-ish” thing. It will not copy or paste anything for you.
Control does what Command did on OSX.
Terminal uses Control+Shift for copy and paste, but only in Terminal: if you pull a Control+Shift+C in Chrome, you will get the Dev tool GUI… The Shift key thing is needed unless you are inclined to root around and change it.
Custom Boot Scripts and Services:
In an effort to keep things simple, I made a shell script to house the processes I want running when I turn on the computer; this streamlines the “.service”-making process. While it may only take marginally more time to make a new service, this way I can keep track of what is doing what from a file in my Documents folder.
In terminal, go to where your services live if you want to look:
cd /etc/systemd/system
Or, cut to the chase:
sudo nano /etc/systemd/system/startsh.sh.service
Paste the following into this new file:
_____________Begin _After_This_Line____________________
[Unit]
Description=Start at Open plz
[Service]
ExecStart=/Documents/startsh.sh
[Install]
WantedBy=multi-user.target
_____________End _Above_This_Line____________________
Exit nano (saving as you go) and cd back home.
cd
sudo nano /Documents/startsh.sh
Paste the following (and any scripts you may want, see the one I have commented out for odrive CLI) into this new file:
_____________Begin _After_This_Line____________________
#!/bin/bash
# Uncomment the following if you want 24/7 odrive in your system
# otherwise do whatever you want
#nohup “$HOME/.odrive-agent/bin/odriveagent” > /dev/null 2>&1 &
# end
_____________End _Above_This_Line____________________
After exiting the shell script, start it all up with the following:
sudo systemctl start startsh.sh
sudo systemctl enable startsh.sh
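If you end up making several of these services, the unit file above can be templated instead of retyped. A small sketch: the description and script path below are just the ones used in this post, so adjust them for your own layout.

```python
# Template a minimal systemd unit for a boot-time shell script.
# The Description and ExecStart values mirror the example service above.
UNIT_TEMPLATE = """[Unit]
Description={description}

[Service]
ExecStart={script}

[Install]
WantedBy=multi-user.target
"""

def make_unit(description, script):
    """Return the text of a minimal .service file for the given script."""
    return UNIT_TEMPLATE.format(description=description, script=script)

print(make_unit("Start at Open plz", "/Documents/startsh.sh"))
```

Write the output to /etc/systemd/system/<name>.service (with sudo), then start and enable it as shown above.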
Cloud file management with Odrive CLI and Odrive Utilities:
Visit one of the two Odrive CLI pages- this one has linux in it:
Please visit this repo to get going with --recursive and other odrive utilities:
https://github.com/amagliul/odrive-utilities
These are the two commands I ended up putting in a markdown file on my desktop for easy access. Nope, not nearly as cool as it is on OSX. But it works…
Odrive sync: [-h] for help
python "$HOME/.odrive-agent/bin/odrive.py" sync
Odrive utilities:
python "$HOME/odrive-utilities/odrivecli.py" sync --recursive
Next, Get Some Apps:
Download Chrome. Sign into Chrome to get your Chrome OS apps loaded into the launcher; in my case, I needed Chrome Remote Desktop. DO NOT DOWNLOAD ADDITIONAL PACKAGES for Chrome Remote Desktop, if that is your thing. They will halt all system tools (disk utils, Gnome terminal, graphical file viewer… !!See this thread, it happened to me!!)
Stock up!
Get Atom editor: https://atom.io/
…Or my favorites: https://www.jetbrains.com/toolbox/app/
Rstudio: https://www.rstudio.com/products/rstudio/download/#download
Mysql: https://dev.mysql.com/downloads/mysql/
MySQL Workbench: https://dev.mysql.com/downloads/workbench/
If you get stuck: make sure you have tried installing as root ($ sudo su -) and verified passwords with ($ sudo mysql_secure_installation)
See here to start “rooting around” MySQL issues: https://stackoverflow.com/questions/50132282/problems-installing-mysql-in-ubuntu-18-04/50746032#50746032
Get some GIS tools:
QGIS!
sudo apt-get install qgis python-qgis qgis-plugin-grass
uGet for bulk USGS data download!
sudo add-apt-repository ppa:plushuang-tw/uget-stable
sudo apt install uget
That’s all for now- Cheers!
-Jess
Find the tools in action on Heroku as a node.js app!
https://kml-tools.herokuapp.com/
See the code on GitHub:
https://github.com/Jesssullivan/Shiny-Apps
After many iterations of ideas regarding deployment for a few research Shiny R apps, I am glad to say the current web-only setup is 100% free and simple to adapt. I thought I’d go through some of the Node.JS bits I have been fussing with.
The Current one:
Heroku has a free tier for node.js apps. See the pricing and limitations here: https://www.heroku.com/pricing. As far as I can tell, there is little reason to read too far into a free plan; they don’t have my credit card, and they seem to convert enough folks to paid customers to be nice enough to offer a free something to everyone.
Shinyapps.io (https://www.shinyapps.io/) works straight from RStudio. They have a free plan. Similar to Heroku, I can’t care too much about limitations as it is completely free.
The reasons to use Node.JS (even if it is just a jade/html wrapper) are numerous, though they may not be completely obvious. If nothing else, Heroku will serve it for free….
Using node is nice because you get all the web-layout-ux-ui stacks of stuff if you need them. Clearly, I have not gone to many lengths to do that, but it is there.
Another big one is using node.js with Electron (https://electronjs.org/). The idea is that a desktop app framework serves up your node app to itself via Chromium. I had a bit of a foray with Electron: the node execa package (npm install execa) let me launch a shiny server from Electron, wait a moment, then load a node/browser app that acts as an interface to the shiny process. While this mostly worked, it is definitely overkill for my shiny stuff. Good to have as a tool though.
-Jess
As many may intuit, I like the AWS ecosystem; it is easy to navigate and usually just works.
…However- more than 1000 dollars later, I no longer use AWS for most things….
🙁
My goals:
Selective sync: I need an unsync function for projects and files due to the tiny 256GB SSD on my laptop (odrive is great, just not perfect for cloud computing)
Shared file system: access files from Windows and OSX, locally and remote
Server must be headless, rebootable, and work remotely from under a heavy enterprise NAT (College)
Needs more than 8gb ram
Runs windows desktop remotely for gis applications, (OSX on my laptop)
Have as much shared file space as possible: 12TB+
Server: recycled, remote, works-under-enterprise-NAT:
Recycled Dell 3010 with i5: https://www.plymouth.edu/webapp/itsurplus/
- Cost: $75 (+ ~$200 in Windows 10 Pro, an inevitable license expense)
- Plus a free spare 16GB of RAM laying around, local SSD and 2TB HDD upgrades
- Does Microsoft-specific GIS bidding, can be left running without hampering productivity
Resilio (bittorrent) selective sync: https://www.resilio.com/individuals/
- Cost: $60
- p2p data management for remote storage + desktop
- Manages school NAT and port restrictions well (remote access via relay server)
Drobo 5c:
- Attached and syncs to 10TB of additional Drobo raid storage, repurposed for NTFS
- Instead of EBS (or S3)
What I see: front end-
Jump VNC Fluid service: https://jumpdesktop.com/
- Cost: ~$30
- Super efficient Fluid protocol; clients include Chrome OS and IOS (with mouse support!)
- Manages heavy NAT and port restrictions well
- GUI for everything, no tunneling around a CLI
- Instead of Workspaces, EC2
Jetbrains development suite: https://www.jetbrains.com/ (OSX)
- Cost: FREE as a verified GitHub student user
- PyCharm IDE, Webstorm IDE
- Instead of Cloud 9
Total (extra) spent: ~$165
(Example: my AWS bill for only October was $262)
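A quick sanity check on that total. The numbers are the ones listed above; the ~$200 Windows license is called out separately, so it is excluded here.

```python
# Out-of-pocket costs from the list above (Windows license excluded,
# since it is noted separately as an inevitable expense).
costs = {
    "recycled Dell 3010": 75,
    "Resilio selective sync": 60,
    "Jump VNC Fluid service": 30,
}
total = sum(costs.values())
print(total)  # 165, i.e. the ~$165 figure above
```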
-Jess
https://en.wikipedia.org/wiki/GeoJSON
https://gis.stackexchange.com/questions/92885/ogr2ogr-converting-kml-to-geojson
If you happen to be working with KML data (or any data with large description strings) and transitioning it into the ESRI Story Map toolset, there is a very good chance you hit the dBase 254-character length limit with the ESRI Shapefile upload. Shapefiles are always a terrible idea.
The solution: with GDAL or QGIS (alright, even in ArcMap), one can use GeoJSON as an output format AND import it into the story map system, with complete long description strings!
QGIS:
Merge vector layers -> save to file -> GeoJSON
arcpy:
import arcpy
import os
arcpy.env.workspace = "/desktop/arcmapstuff"
arcpy.FeaturesToJSON_conversion(os.path.join("outgdb.gdb", "myfeatures"), "output.json")
GDAL:
ogr2ogr -f GeoJSON output.json input.kml
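As an aside, the core of the conversion is small enough to sketch in plain Python with the standard library, if only to watch the long description strings pass through intact. This is a minimal sketch that assumes simple Point placemarks only; use ogr2ogr or QGIS for real data.

```python
import xml.etree.ElementTree as ET

# KML's XML namespace, needed for every tag lookup below.
KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_points_to_geojson(kml_text):
    """Convert Point placemarks in a KML string to a GeoJSON FeatureCollection.
    Description strings pass through untouched: no 254-character dBase limit."""
    root = ET.fromstring(kml_text)
    features = []
    for pm in root.iter(KML_NS + "Placemark"):
        name = pm.findtext(KML_NS + "name", default="")
        desc = pm.findtext(KML_NS + "description", default="")
        coords = pm.findtext(KML_NS + "Point/" + KML_NS + "coordinates")
        if coords is None:
            continue  # this sketch skips lines, polygons, etc.
        lon, lat = [float(v) for v in coords.strip().split(",")[:2]]
        features.append({
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat]},
            "properties": {"name": name, "description": desc},
        })
    return {"type": "FeatureCollection", "features": features}
```

Dump the returned dict with json.dumps and the story map tooling will take it as-is.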
View the tools here: http://kml.jessdev.org
Three of my KML tools are now stable and in Github. These are actually displayed via the static site generator Hugo (read about the Hugo CLI here), which is sitting in the shiny server (port 3838) next to the apps. Messy, but it will do for now.
https://github.com/Jesssullivan/Shiny-Apps
-Jess
New Shiny App specific Repo now live…
https://github.com/Jesssullivan/Shiny-Apps
With KML Search and Convert now fully functional (along with the tiny app “clean”), live shiny apps of mine now have a repo of their own. Check it out!
-Jess
Written in R; using GDAL/EXPAT libraries on Ubuntu and hosted with AWS EC2.
New App: KML Search and Convert
Here is a simple (beta) app of mine that converts KML files into Excel-friendly CSV documents. It also has a search function, so you can download a subset of the data that contains keywords. 🙂
The files will soon be available in Github.
I’m still working on a progress indicator; it currently lets you download before it is done processing. Know that a completely processed file is titled “kml2csv_<yourfile>.csv”.
…YMMV. xD
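For the curious, the search-and-convert concept looks roughly like this. A Python sketch of the idea only: the real app is written in R against the GDAL/EXPAT libraries, and every name below is hypothetical.

```python
import csv
import xml.etree.ElementTree as ET

# KML's XML namespace, needed for tag lookups.
KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_search_to_csv(kml_text, keyword, out_path):
    """Write placemarks whose name or description contains `keyword`
    to an Excel-friendly CSV. Returns the number of matching rows."""
    root = ET.fromstring(kml_text)
    matches = 0
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "description"])
        for pm in root.iter(KML_NS + "Placemark"):
            name = pm.findtext(KML_NS + "name", default="")
            desc = pm.findtext(KML_NS + "description", default="")
            if keyword.lower() in (name + " " + desc).lower():
                writer.writerow([name, desc])
                matches += 1
    return matches
```

The case-insensitive substring match is the whole “search” trick; the CSV half is just flattening placemark fields into rows.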