Trans Scend Survival

Trans: Latin prefix implying "across" or "beyond", often used in gender nonconforming situations – Scend: Archaic word describing a strong "surge" or "wave", originating with 15th century English sailors – Survival: 15th century English compound word describing an existence only worth transcending.


Summer 2019 Update!

GIS Updates:

Newish Raster / DEM image → STL tool in the Shiny-Apps repo:

https://github.com/Jesssullivan/Shiny-Apps

See the (non-load balanced!) live example on the Heroku page:

https://kml-tools.herokuapp.com/

Summarized for a forum member here too:  https://www.v1engineering.com/forum/topic/3d-printing-tactile-maps/

CAD / CAM Updates:

Been revamping my CNC thoughts- 

Basically, the next move is a complete rebuild (primarily for 6061 aluminum).

I am aiming for:

  • Marlin 2.x.x around either a full-Rambo or 32 bit Archim 1.0 (https://ultimachine.com/)
  • Dual endstop configuration, CNC only (no hotend support)
  • 500 mm × 500 mm work area / swappable spoilboards (~700 mm exterior MPCNC conduit length)
  • Continuous compressed air chip clearing, shop vac / cyclone chip removal
  • Two chamber, full acoustic enclosure (cutting space + air I/O for vac and compressor)
  • Full octoprint networking via GPIO relays

FWIW: Sketchup MPCNC:

https://3dwarehouse.sketchup.com/model/72bbe55e-8df7-42a2-9a57-c355debf1447/MPCNC-CNC-Machine-34-EMT

Also TinkerCAD version:

https://www.tinkercad.com/things/fnlgMUy4c3i

Electric Drivetrain Development:

BORGI / Axial Flux stuff:

https://community.occupycars.com/t/borgi-build-instructions/37

Designed some rough coil winders for motor design here:

https://community.occupycars.com/t/arduino-coil-winder/99

Repo:  https://github.com/Jesssullivan/Arduino_Coil_Winder

Also, an itty-bitty, skate bearing-scale axial flux / 3-phase motor to hack upon:

https://www.tinkercad.com/things/cTpgpcNqJaB


Cheers-

– Jess

Warbler Trillers of the Charles

The first ones to arrive in MA, brush up!

 

Palm warbler

songs

 

This is usually the first one to arrive.  Gold bird, medium-sized warbler, rufous hat.  When they arrive in MA they are often found lower than usual / on the ground looking for anything they can munch on.  Song is a rapid trill. More “musical / pleasant” than a fast chipping sparrow, faster than many Junco trills.

 

Pine warbler

songs

 

Slimmer than the Palm, no hat, very slim beak, usually with streaks on the breast.  Also a triller. They remain higher in the trees on arrival.

 

Yellow-rumped warbler

songs

 

Spectacular bird, if it has arrived you can’t miss it- also they will arrive by the dozen so worth waiting for a good visual.  These also trill, which is another reason it is good to get a visual. The trill is slow, very “sing-song”, and has a downward inflection at the end.  If there are a bunch sticking around for the summer, try to watch some sing- soon enough you will be able to pick out this trill from the others.

 

Yellow warbler says “sweet sweet sweet, I’m so Sweet!” and can get a bit confusing with Yellow-rumped warbler

Chestnut-sided warbler says “very very pleased to meet ya!” and can get a bit confusing with Yellow warbler

 

Black-and-white warbler

songs

 

Looks like a zebra – always acts like a nuthatch (clings to trunk and branches).  This one trills like a rusty wheel. It can easily be distinguished after a bit of birding with some around.  

 

American Redstart

songs

 

Adult males look like a late 50’s hot-rodded American muscle car: long, low, two tone paint job.  Matte/luster black with flame accents. Can’t miss it. The females and young males are buff (chrome, to keep in style I guess) with yellow accents.  Look for behavior- if a “female” is getting beaten up while trying to sing a song in the same area, it is actually a first year male failing to establish a territory due to obviously being a youth.

 

Cheers,

– Jess

Notes on a Free and Open Source Notes App:  Joplin

Joplin for all your Operating Systems and devices

As a lifelong IOS + OSX user (Apple products), I have used many, many notes apps over the years.  From big-name apps like OmniFocus, Things 3, Notes+, to all the usual suspects like Trello, Notability, Notemaster, RTM, and others, I always eventually migrate back to Apple Notes, simply because it is always available and always up to date.  It has zero “features” besides this convenience, which is why I am perpetually willing to give a new app a spin.

Joplin is free, open source, and works on OSX, Windows, and Linux, plus IOS and Android phones.

Find it here:

https://joplin.cozic.net/

brew install joplin 

The most important thing this project has nailed is cloud support and syncing.  I have my iPhone and computers syncing via Dropbox, which is easy to set up and works really well.  The Joplin folks have added many cloud options, so this is unlikely to be a sticking point for users.

Here are some of the key features:

  • Markdown is totally supported for straightforward and easy formatting
  • External editor support for emacs / atom / etc folks
  • Layout is clean, uncluttered, and just makes sense
  • Built-in markdown text editor and viewer is great
  • Notebook, todo, note, and tags work great across platforms
  • Browser integration, E2EE security, file attachments, and geolocation included

Hopefully this will be helpful.

Cheers,

– Jess

Mac OSX: Fixing GPT and PMBR Tables

My computer recently crashed very, very hard while I was removing a small, empty alternative-OS partition I no longer needed.  This is a fairly mundane operation that I do now and again, part of an ongoing fight to keep at least a few gigs of space free for actual work on a precious 250gb Mac SSD.

The crash results?  Toasted GPT tables all around.   My 2015 computer’s next move was to reboot- only to find essentially no partitions… at all.  What it did show was (wait for it) Clover bootloader of all things, with a single Windows Boot Camp icon (nothing in there either).  That is so wrong… on all levels!

I brought the machine to the local university repair.  They declared this machine bricked and offered to wipe it.  Back to me it came…

I scheduled an Apple support session with a phone rep; after around 45 minutes of genuinely productive troubleshooting ideas (none of them helping, though), I was forwarded to a senior supervisor.  She was interested in this problem, and we scheduled a larger block of time.  But in the meantime, I still wanted to try again…

How to recover a garbled GPT table for Mac OSX:

Start with clean SMC and PRAM / NVRAM.

Clearing these actually made accessing internet recovery (how we get to a stand-in OS with a terminal) dozens of times faster: from 2.5 hours down to 7 minutes.  I actually waited 2.5 hours twice on separate attempts before I cleared these.

Follow these Apple links to perform these operations:

https://support.apple.com/en-us/HT204063

https://support.apple.com/en-us/HT201295

Get the computer with a text editor open.

Restart the computer into internet recovery.  Command + R or Command + Shift + R.

Wait.

Open a Terminal.  The graphical Disk Utility is useless because the disk / partition we want is unreachable (so it will say everything is great).

Run:

diskutil list

For me, I see disk0s2 is 180.6 gb.  That’s my stuff!

I also found /dev/disk2 → /dev/disk14 to be tiny partitions- don’t worry about those.

The syntax you are looking for is:

Name: “untitled” Identifier: disk#

(NOT disk#s#)

Write down ALL of the above information for the disk you are after.  That is probably disk0.

Then:

gpt -r show disk0

Copy down the readout in your terminal for every entry bigger than "32".  The critical fields here are Start, Size, Index, and Contents. Each field is supremely important.

Here is mine (formatted for web):

# Disk0, with contents > "32":

# First Table:

Start: 40  

Size: 409600

Index:  1

Contents: C12A7328-F81F-11D2-BA4B-00A0C93EC93B

# Second table, the one with my data:

Start: 409640

Size: 352637568

Index: 2

Contents: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF

Note, this is the initial Contents.  I rewrote this once with the correct Apple Index 2 data but did not create a new table (leaving the rest of the broken bits broken).  We are replacing / destroying a table here, but not the data.     

Actions:

# unmount the disk.  From here we are doing tables, not disks / data.

diskutil unmountDisk disk0

# Get rid of the GPT on the disk we are recovering.  We are not touching the data.

gpt destroy disk0

# Make a new one to start with some fresh values.

gpt create -f disk0

# perform magic trick

# USE THE DATA YOU WROTE DOWN FROM "gpt -r show disk0".  THIS IS IMPORTANT.

# we must add that first small partition at index 1.  Verbatim.

gpt add -i 1 -b 40 -s 409600 -t C12A7328-F81F-11D2-BA4B-00A0C93EC93B disk0

# index two (for me) is my data.  We are going to use the default OSX / Mac HD partition values.

# the Length of "372637568" is not as surefire as the GPT Contents.

# YMMV, but YOLO.

gpt add -i 2 -b 409640 -s 372637568 -t 7C3457EF-0000-11AA-AA11-00306543ECAC disk0

Again, that Contents value is 7C3457EF-0000-11AA-AA11-00306543ECAC.
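Before rebooting, it is worth sanity-checking the new table with a read-only look (nothing here writes data):

# both entries should be back at their original offsets:

gpt -r show disk0

# and disk0s2 should reappear at its old size:

diskutil list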

– Jess

written on the recovered computer xD

Musings On Chapel Language and Parallel Processing

Below is the readme, mirrored from my GitHub repo; scroll down for my Python3 evaluation script.

…or visit the page directly: https://github.com/Jesssullivan/ChapelTests

[github_readme repo=”Jesssullivan/ChapelTests”]

Now Some Python3 Evaluation:

# Adjacent to the compiled FileCheck.chpl binary:

python3 Timer_FileCheck.py

Timer_FileCheck.py will loop FileCheck and find the average time it takes to complete, with a variety of additional arguments to toggle parallel and serial operation. The iterations are:

ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
  • Default – full parallel

  • Serial evaluation (--SE) but parallel domain creation

  • Serial domain creation (--SP) but parallel evaluation

  • Full serial (--SE --SP)

Output is saved as Time_FileCheck_Results.txt

  • Output is also logged after each of the (default 10) loops.

The idea is to evaluate a "--flag" (in this case, Serial or Parallel in FileCheck.chpl) to see if there are time benefits to parallel processing. In this case, there really are not any, because that program relies mostly on disk speed.
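For a single run outside the timing loop, the same flags can be passed straight to the compiled binary (a quick sketch; the flag names are taken from the script below):

# one full-serial pass, with no report, timers, or verbose logging:

./FileCheck --SE=true --SP=true --R=false --T=false --V=false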

Evaluation Test:

# Timer_FileCheck.py
#
# A WIP by Jess Sullivan
#
# evaluate average run speed of both serial and parallel versions
# of FileCheck.chpl  --  NOTE: coforall is used in both BY DEFAULT.
# This is to bypass the slow findfiles() method by dividing file searches
# by number of directories.

import subprocess
import time

File = "./FileCheck" # chapel to run

# default false, use for evaluation
SE = "--SE=true"

# default false, use for evaluation
SP = "--SP=true" # no coforall looping anywhere

# default true, make it false:
R = "--R=false"  #  do not let chapel compile a report per run

# default true, make it false:
T = "--T=false" # no internal chapel timers

# default true, make it false:
V = "--V=false"  #  use verbose logging?

# default is false
bug = "--debug=false"

Default = (File, R, T, V, bug) # default parallel operation
Serial_SE = (File, R, T, V, bug, SE)
Serial_SP = (File, R, T, V, bug, SP)
Serial_SE_SP = (File, R, T, V, bug, SP, SE)


ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]

loopNum = 10 # iterations of each runTime for an average speed.

# setup output file
file = open("Time_FileCheck_Results.txt", "w")

file.write('eval ' + str(loopNum) + ' loops for ' + str(len(ListOptions)) + ' FileCheck Options' + "\n")

def iterateWithArgs(loops, args, runTime):
    for l in range(loops):
        start = time.time()
        subprocess.run(args)
        end = time.time()
        runTime.append(end-start)

for option in ListOptions:
    runTime = []
    iterateWithArgs(loopNum, option, runTime)
    file.write("average runTime for FileCheck with "+ str(option) + "options is " + "\n\\")
    file.write(str(sum(runTime) / loopNum) +"\n\\")
    print("average runTime for FileCheck with " + str(option) + " options is " + "\n\\")
    print(str(sum(runTime) / loopNum) +"\n\\")

file.close()

Evaluating Ubuntu Pop OS: Dual Boot Setup

Dual OS on a 2015 MacBook pro

As the costs of Apple computers continue to skyrocket and the price of useable amounts of storage zooms past a neighboring galaxy (for a college student, at least), I am always on the hunt for cost-effective solutions to house and process big projects and large data.

Pop OS (a neatly wrapped Ubuntu) is the in-house OS from System76.  After looking through their catalog of incredible computers and servers, I thought it would be a good time to see how far I can go with an Ubuntu daily driver.  Of course, there are many major and do-not-pass-go downsides- see the below list:

  • Logic Pro X → There is no replacement 🙁   A killer DAW with fantastic AU libraries. I am versed with Reaper and Bitwig, but neither is as complete as Logic Pro.  I will be evaluating POP with an installation of Reaper, but with so few plugins (I own very few third party sets) this is not a fair replacement.
  • Adobe PS and LR:  I do not like Adobe, but these programs are… …kind of crucial for most projects of mine that involve 2D raster graphics.  I continue to use Inkscape for many tasks, but it is irrelevant when it comes to pixel-based work and photo management / bulk operations.
  • AutoCAD / Fusion 360 / Sketchup:  I like FreeCAD a lot, but it is not at all like the other programs.  Not worse or better, but these are all very different animals for different uses.
  • Apple notes and other apple-y things:  OSX is extremely refined. Inter-device solutions are superb.  I have gotten myself used to Google Keep, but it is not quite at the in-house Apple level.
  • XCode and IOS Simulator environments:  I do use Expo, but frankly to make products for Apple you need a Mac.

Dual Boot (OSX and Pop Ubuntu) Installation on a 2015 MBP:

This process is quite simple, and only calls for a small handful of post-installation tweaks.  My intent is to create a small sandbox with minimal use of “extras” (no extra boot managers or anything like that).

Steps:

Partition separate “boot”, “home”, and other drives

  • I am using a 256gb micro sd partitioned in half for OSX and Pop_OS (Sandisk Extreme, “v3” speed-rating card, via a BaseQi slot adapter)

Use the partition tool in Mac Disk Utility.  Be sure to set these new partitions as FAT32- we will be using ext4 and other more linux-y filesystems upon installation, so these need to be as generic as possible.
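If you would rather stay in a terminal than Disk Utility, something like this should be equivalent (a sketch only: disk2 and the 50% split are placeholders, so confirm the device with diskutil list first):

# split the card in half: JHFS+ for OSX, FAT32 staged for Pop_OS

sudo diskutil partitionDisk disk2 2 GPT JHFS+ MacSide 50% FAT32 POPOS R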

Get a copy of Pop_OS from System76.

Use Etcher (recommended) or any other image burning tool to create a boot key for Pop.  
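If Etcher isn’t handy, plain dd from OSX does the same small job (a hedged sketch: disk3 is a placeholder for your USB key, and a wrong disk number here is catastrophic):

diskutil list

# find the USB key in that listing, then unmount it:

diskutil unmountDisk /dev/disk3

sudo dd if=~/Downloads/pop-os.iso of=/dev/rdisk3 bs=1m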

The USB key only has one small job: from it, Pop_OS will be installed into the boot partition made in the previous step.  If you are coming from a hackintosh experience, fear not: everything will stay in the Macbook Pro- no extra USB safety dongles, Kexts, or Plist mods…!

BOOT INTO POP_OS:

Restart your computer and hold down the alt-option key.  THIS IS HOW TO SWITCH between Pop_OS, OSX, Bootcamp, and anything else you have in there.  You should see an “efi” option next to the default OSX.  (Note: at least in my case, the built-in bootloader defaults to the last used OS at each restart.)

Once you are in the Pop_OS installer, click through and select the appropriate partitions when prompted.  After the installation, you may remove the USB key and continue to select “efi” in the bootloader.


ASSUMING ALL GOES WELL:

You are now in Pop_OS!  Using the alt/option key will become second nature… but some Pop key mappings may not.  Continue for a list of Macbook Pro-specific tweaks and notes.

First moves:

Go to the Pop Shop and get the “Tweaks” tool.  I made one or two small keymap changes, but this is likely personal preference.  

Default, important Key Mappings:

Command will act as a “control center-ish” thing.  It will not copy or paste anything for you.

Control does what Command did on OSX.  

Terminal uses Control+Shift for copy and paste, but only in Terminal:  if you pull a Control+Shift+C in Chrome, you will get the Dev tool GUI…  The Shift key thing is needed unless you are inclined to root around and change it.

Custom Boot Scripts and Services:

In an effort to keep things simple, I made a shell script to house the processes I want running when I turn on the computer- this streamlines the “.service”-making process.  While it may only take marginally more time to make a new service, this way I can keep track of what is doing what from a file in my Documents folder.

In terminal, go to where your services live if you want to look:

cd /etc/systemd/system

Or, cut to the chase:

sudo nano /etc/systemd/system/startsh.sh.service

Paste the following into this new file:

_____________Begin _After_This_Line____________________

[Unit]

Description=Start at Open plz

[Service]

ExecStart=/Documents/startsh.sh

[Install]

WantedBy=multi-user.target

_____________End _Above_This_Line____________________

Exit nano (saving as you go) and cd back to “/”.

cd

sudo nano /Documents/startsh.sh

Paste the following (and any scripts you may want, see the one I have commented out for odrive CLI) into this new file:

_____________Begin _After_This_Line____________________

#!/bin/bash

# Uncomment the following if you want 24/7 odrive in your system

# otherwise do whatever you want

#nohup "$HOME/.odrive-agent/bin/odriveagent" > /dev/null 2>&1 &

# end

_____________End _Above_This_Line____________________
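One step implied but not shown above: systemd will only run an executable ExecStart target, so mark the script executable before starting anything:

sudo chmod +x /Documents/startsh.sh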

After exiting the shell script, start it all up with the following:

sudo systemctl start startsh.sh

sudo systemctl enable startsh.sh
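If the service does not seem to do anything after a reboot, these two are the first places to look:

# did the unit load and start?

sudo systemctl status startsh.sh

# recent log output for the unit:

journalctl -u startsh.sh --no-pager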

Cloud file management with Odrive CLI and Odrive Utilities:

Visit one of the two Odrive CLI pages- this one has linux in it:

https://forum.odrive.com/t/odrive-sync-agent-a-cli-scriptable-interface-for-odrives-progressive-sync-engine-for-linux-os-x-and-windows/499#linuxinst

Please visit this repo to get going with --recursive and other odrive utilities:

https://github.com/amagliul/odrive-utilities


These are the two commands I ended up putting in a markdown file on my desktop for easy access.  Nope, not nearly as cool as it is on OSX. But it works…

Odrive sync: [-h] for help

python "$HOME/.odrive-agent/bin/odrive.py" sync

Odrive utilities:

python "$HOME/odrive-utilities/odrivecli.py" sync --recursive

Next, Get Some Apps:

Download Chrome.  Sign into Chrome to get your Chrome OS apps loaded into the launcher- in my case, I needed Chrome Remote Desktop.  DO NOT DOWNLOAD ADDITIONAL PACKAGES for Chrome Remote Desktop, if that is your thing: they will halt all system tools (disk utils, Gnome terminal, the graphical file viewer…).  It happened to me!

Stock up!  

Get Atom editor:  https://atom.io/

…Or my favorites: https://www.jetbrains.com/toolbox/app/

Rstudio:  https://www.rstudio.com/products/rstudio/download/#download

Mysql:  https://dev.mysql.com/downloads/mysql/

MySQL Workbench:  https://dev.mysql.com/downloads/workbench/

If you get stuck:  make sure you have tried installing as root ($ sudo su -) and verified passwords with ($ sudo mysql_secure_installation)  

See here to start “rooting around” MySQL issues:  https://stackoverflow.com/questions/50132282/problems-installing-mysql-in-ubuntu-18-04/50746032#50746032
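For what it is worth, the usual culprit on recent Ubuntu is root authenticating via the auth_socket plugin rather than a password; here is a hedged sketch of the common workaround from threads like the one above (swap in your own password):

sudo mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'yourpassword';"

sudo mysql -e "FLUSH PRIVILEGES;"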

Get some GIS tools:

QGIS!

sudo apt-get install qgis python-qgis qgis-plugin-grass

uGet for bulk USGS data download!

sudo add-apt-repository ppa:plushuang-tw/uget-stable

sudo apt install uget

That’s all for now- Cheers!

-Jess

Deploy Shiny R apps along Node.JS

Find the tools in action on Heroku as a node.js app!

https://kml-tools.herokuapp.com/

See the code on GitHub:

https://github.com/Jesssullivan/Shiny-Apps

After many iterations of ideas regarding deployment for a few research Shiny R apps, I am glad to say the current web-only setup is 100% free and simple to adapt.   I thought I’d go through some of the Node.JS bits I have been fussing with. 

The Current one:  

Heroku has a free tier for node.js apps.  See the pricing and limitations here: https://www.heroku.com/pricing.  As far as I can tell, there is little reason to read too far into a free plan; they don’t have my credit card, and they seem to convert enough folks to paid customers to be nice enough to offer a free something to everyone.
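Getting a node app onto that free tier is only a few commands, assuming the Heroku CLI is installed and the repo has a package.json (the app name below is a placeholder):

heroku create my-kml-tools

# Heroku detects the node app from package.json and builds it on push:

git push heroku master

heroku open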

Shiny Apps (https://www.shinyapps.io/) works straight from RStudio.  They have a free plan; similar to Heroku, I can’t care too much about limitations, as it is completely free.

The reasons to use Node.JS (even if it is just a jade/html wrapper) are numerous, though they may not be completely obvious.  If nothing else, Heroku will serve it for free….

Using node is nice because you get all the web-layout-ux-ui stacks of stuff if you need them.  Clearly, I have not gone to many lengths to do that, but it is there.

Another big one is using node.js with Electron (https://electronjs.org/).  The idea is that a desktop app framework serves your node app up to itself via Chromium.  I had a bit of a foray with Electron: the execa package (npm install execa) let me launch a Shiny server from Electron, wait a moment, then load a node/browser app that acts as an interface to the Shiny process.  While this mostly worked, it is definitely overkill for my Shiny stuff.  Good to have as a tool though.

-Jess

Recycled Personal “Cloud Computing” under NAT

As many may intuit, I like the AWS ecosystem; it is easy to navigate and usually just works.  

…However- more than 1000 dollars later, I no longer use AWS for most things….

🙁   

My goals: 

Selective sync:  I need an unsync function for projects and files due to the tiny 256gb SSD on my laptop (odrive is great, just not perfect for cloud computing).

Shared file system:  access files from Windows and OSX, locally and remote

Server must be headless, rebootable, and work remotely from under a heavy enterprise NAT (College)

Needs more than 8gb ram

Runs a Windows desktop remotely for GIS applications (OSX on my laptop)

 

Have as much shared file space as possible: 12TB+

 

Server:  recycled, remote, works-under-enterprise-NAT:

Recycled Dell 3010 with i5: https://www.plymouth.edu/webapp/itsurplus/

– Cost: $75 (+ ~$200 in windows 10 pro, inevitable license expense) 

– Free: spare 16gb of ram laying around, plus local SSD and 2TB HDD upgrades

– Does Microsoft-specific GIS bidding, can leave running without hampering productivity

Resilio (bittorrent) Selective sync: https://www.resilio.com/individuals/

– Cost: $60

– p2p Data management for remote storage + desktop

– Manages school NAT and port restrictions well (remote access via relay server)

Drobo 5c:

– Attached and syncs to 10TB of additional Drobo RAID storage, repurposed for NTFS

  • Instead of EBS (or S3)

 

What I see:  front end-

Jump VNC Fluid service: https://jumpdesktop.com/

– Cost: ~$30

– Super efficient Fluid protocol; clients include Chrome OS and IOS (with mouse support!)

– Manages heavy NAT and port restrictions well

– GUI for everything, no tunneling around a CLI

  • Instead of Workspaces, EC2

Jetbrains development suite:  https://www.jetbrains.com/ (OSX)

– Cost:  FREE as a verified GitHub student user.

– PyCharm IDE, Webstorm IDE

  • Instead of Cloud 9

 

Total (extra) spent: ~$165

(Example:  my AWS bill for only October was $262)

 

-Jess

Quick fix: 254 character limit in ESRI Story Map?

https://gis.stackexchange.com/questions/75092/maximum-length-of-text-fields-in-shapefile-and-geodatabase-formats

https://en.wikipedia.org/wiki/GeoJSON

https://gis.stackexchange.com/questions/92885/ogr2ogr-converting-kml-to-geojson

If you happened to be working with…  KML data (or any data with large description strings) and transitioning it into the ESRI Story Map toolset, there is a very good chance you hit the dBase 254 character length limit with the ESRI Shapefile upload.  Shapefiles are always a terrible idea.

 

The solution:  with GDAL or QGIS (alright, even in ArcMap), one can use GeoJSON as an output format AND import into the Story Map system- with complete long description strings!

 

QGIS:

Merge vector layers -> save to file -> GeoJSON

arcpy:
import arcpy

import os

arcpy.env.workspace = "/desktop/arcmapstuff"

arcpy.FeaturesToJSON_conversion(os.path.join("outgdb.gdb", "myfeatures"), "output.json")

GDAL:
ogr2ogr -f GeoJSON output.json input.kml

New App:  KML Search and Convert

Written in R; using GDAL/EXPAT libraries on Ubuntu and hosted with AWS EC2.


Here is a simple (beta) app of mine that converts KML files into Excel-friendly CSV documents.  It also has a search function, so you can download a subset of data that contains keywords.   🙂
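The conversion itself leans on GDAL; outside the app, the equivalent one-liner is something like this (a sketch rather than the app’s exact call, with the output named per the app’s convention):

ogr2ogr -f CSV kml2csv_yourfile.csv yourfile.kml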

The files will soon be available on GitHub.

I’m still working on a progress indicator; it currently lets you download before it is done processing.   Know that a completely processed file is titled “kml2csv_<yourfile>.csv”.

…YMMV.  xD

GDAL for R Server on Ubuntu – KML Spatial Libraries and More


If you made the (possible) mistake of running with a barebones Red Hat Linux instance, you will find it is missing many things you may want in R.   I rely on GDAL (the definitive Geospatial Data Abstraction Library) in my local OSX R setup, and want it on my server too.  GDAL contains many libraries you need to work with KML, RGDAL, and other spatial packages.  It is massive and usually takes a long time to sort out on any machine.

These notes assume you are already involved with an R server (usually port 8787 in a browser).  I am running mine from an EC2 instance with AWS.

! Note this is a fresh server install, using Ubuntu; I messed up my original ones while trying to configure GDAL against conflicting packages. If you are creating a new one, opt for at least a T2 medium (or go bigger) and find the latest Ubuntu server AMI.  For these instructions, you want an OS that is as generic as possible.

On Github:

https://github.com/Jesssullivan/rhel-bits

From Bash:

# SSH into the EC2 instance: (here is the syntax just in case)

#ssh -i "/Users/YourSSHKey.pem" ec2-user@yourAWSinstance.amazonaws.com

sudo su -

apt-get update

apt-get upgrade

nano /etc/apt/sources.list

#enter as a new line at the bottom of the doc:

deb https://cloud.r-project.org/bin/linux/ubuntu xenial/

#exit nano

wget https://raw.githubusercontent.com/Jesssullivan/rhel-bits/master/xen-conf.sh

chmod 777 xen-conf.sh

./xen-conf.sh

Or…

From SSH:

# SSH into the EC2 instance: (here is the syntax just in case)

ssh -i "/Users/YourSSHKey.pem" ec2-user@yourAWSinstance.amazonaws.com

# if you can, become root and make some global users- these will be your access to

# RStudio Server and shiny too!

sudo su -

adduser <Jess>

# Follow the prompts carefully to create the user

apt-get update

nano /etc/apt/sources.list

# enter as a new line at the bottom of the doc:

deb https://cloud.r-project.org/bin/linux/ubuntu xenial/

# exit nano
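# if apt-get update now complains about an unverified signature for the CRAN line,

# import the repo signing key first (key ID per CRAN's Ubuntu instructions; verify it there):

apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9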

# Start, or try bash:

apt-get install r-base

apt-get install r-base-dev

apt-get update

apt-get upgrade

wget http://download.osgeo.org/gdal/2.3.1/gdal-2.3.1.tar.gz

tar xvf gdal-2.3.1.tar.gz

cd  gdal-2.3.1

# begin making GDAL: this all takes a while

./configure  [if you need proper KML support (like me), search on configuring with expat or libkml.   There are many more options for configuration based on other packages that can go here, and this is the step to get them in order…]
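# as a concrete starting point (an assumption on my part; run ./configure --help

# to confirm these flags exist in your GDAL version):

./configure --with-expat=yes --with-libkml=yes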

sudo make

sudo make install

cd # Try entering R now and check the version!

# Start installing RStudio server and Shiny

apt-get update

apt-get upgrade
sudo apt-get install gdebi-core
wget https://download2.rstudio.org/rstudio-server-1.1.456-amd64.deb
sudo gdebi rstudio-server-1.1.456-amd64.deb

# Enter R or go to the graphical R Studio installation in your browser

R

# Authenticate if using the graphical interface using the usr:pwd you defined earlier

# this will take a long time

install.packages("rgdal")

# Note any errors carefully!

Then:

install.packages("dplyr")

install.packages(c("data.table", "tidyverse", "shiny"))  # etc
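Once those finish, a quick smoke test from the shell confirms rgdal linked against the hand-built GDAL (assuming R is on your PATH):

R -e 'library(rgdal)'

# on load, rgdal prints the GDAL runtime version it found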

Well, there you have it!

-Jess

Extras:

##Later, ONLY IF you NEED Anaconda, FYI:

# Get Anaconda: this is a large package manager, and could be used for patching up missing dependencies:

# Use "ls" followed by rm -r <anaconda> (fill in with ls results) to remove conflicting conda

# installers if you have any issue there; I am starting fresh:

mkdir binconda

# *making a weak attempt at sandboxing the massive new package manager installation*

cd binconda
wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh
# install and follow the prompts
bash Anaconda2-4.3.0-Linux-x86_64.sh

# Close the terminal window completely and start a new one, and ssh back to where you left

# off.  Conda install requires this.

# Open and SSH back into your instance.  You should now have either additional flexibility

# in patching holes in dependencies, or some large new holes in your server.  YMMV.

### Done

Red Hat stuff:

Follow these AWS instructions if you are doing something else:

https://aws.amazon.com/blogs/big-data/running-r-on-aws/

See my notes on this here:

https://www.transscendsurvival.org/2018/03/08/how-to-make-a-aws-r-server/

and notes on Shiny server:

https://www.transscendsurvival.org/2018/07/16/deploy-a-shiny-web-app-in-r-using-aws-ec2-red-hat/

GDAL on Red Hat: existing threads on this:

https://gis.stackexchange.com/questions/120101/building-gdal-with-libkml-support/120103#120103

This is a nice short thread about building from source:

https://gis.stackexchange.com/questions/263495/how-to-install-gdal-on-centos-7-4

A neat RPM package-finding tool, just in case:

https://rpmfind.net/linux/rpm2html/

Info on the LIBKML driver if you end up with issues there:

http://www.gdal.org/drv_libkml.html

 

I hope this is useful- GDAL is important and best to set it up early.  It will be a pain, but so is losing work while trying to patch it in later.  xD

 

-Jess

 


© 2024 Trans Scend Survival

α wιρ Σ ♥ by Jess Sullivan