Below is a mirror of the readme from my GitHub repo; scroll down for my Python3 evaluation script.
Or visit the page directly: https://github.com/Jesssullivan/ChapelTests
[github_readme repo="Jesssullivan/ChapelTests"]
Now Some Python3 Evaluation:
# Adjacent to the compiled FileCheck.chpl binary:
python3 Timer_FileCheck.py
Timer_FileCheck.py loops FileCheck and finds the average time it takes to complete, using a variety of additional arguments to toggle parallel and serial operation (equivalent invocations are sketched after the list below). The iterations are:
ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
- Default: full parallel
- Serial_SE: serial evaluation (--SE) but parallel domain creation
- Serial_SP: serial domain creation (--SP) but parallel evaluation
- Serial_SE_SP: full serial (--SE --SP)
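For reference, each option resolves to a command line roughly like the following (a minimal sketch; the report, timer, verbose, and debug flags that the script below always passes are omitted here for brevity):
# Default, full parallel:
./FileCheck
# Serial_SE:
./FileCheck --SE=true
# Serial_SP:
./FileCheck --SP=true
# Serial_SE_SP, full serial:
./FileCheck --SE=true --SP=true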
Output is saved as Time_FileCheck_Results.txt
- Results are also printed to the console after each option completes its (default 10) loops.
The idea is to evaluate a "--flag" (in this case, serial or parallel operation in FileCheck.chpl) to see if there are time benefits to parallel processing. In this case, there really are not any, because the program is mostly bound by disk speed.
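Put another way: if the workload is disk-bound, all four configurations should report roughly the same average time, and that is what the results here show.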
Evaluation Test:
# Timer_FileCheck.py
#
# A WIP by Jess Sullivan
#
# Evaluates the average run speed of both serial and parallel versions
# of FileCheck.chpl -- NOTE: coforall is used in both BY DEFAULT.
# This is to bypass the slow findfiles() method by dividing file searches
# by number of directories.
import subprocess
import time
File = "./FileCheck" # chapel to run
# default false, use for evaluation
SE = "--SE=true"
# default false, use for evaluation
SP = "--SP=true" # no coforall looping anywhere
# default true, make it false:
R = "--R=false" # do not let chapel compile a report per run
# default true, make it false:
T = "--T=false" # no internal chapel timers
# default true, make it false:
V = "--V=false" # use verbose logging?
# default is false
bug = "--debug=false"
Default = (File, R, T, V, bug) # default parallel operation
Serial_SE = (File, R, T, V, bug, SE)
Serial_SP = (File, R, T, V, bug, SP)
Serial_SE_SP = (File, R, T, V, bug, SP, SE)
ListOptions = [Default, Serial_SE, Serial_SP, Serial_SE_SP]
loopNum = 10 # iterations of each runTime for an average speed.
# setup output file
file = open("Time_FileCheck_Results.txt", "w")
file.write("eval " + str(loopNum) + " loops for " + str(len(ListOptions)) + " FileCheck options" + "\n")
# time `loops` runs of FileCheck with the given args, appending each duration:
def iterateWithArgs(loops, args, runTime):
    for l in range(loops):
        start = time.time()
        subprocess.run(args)
        end = time.time()
        runTime.append(end - start)
for option in ListOptions:
    runTime = []
    iterateWithArgs(loopNum, option, runTime)
    file.write("average runTime for FileCheck with " + str(option) + " options is " + "\n")
    file.write(str(sum(runTime) / loopNum) + "\n")
    print("average runTime for FileCheck with " + str(option) + " options is ")
    print(str(sum(runTime) / loopNum))
file.close()
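A quick note on the script: subprocess.run() requires Python 3.5 or newer, and each option tuple is passed to it directly, so no shell is involved in the timed runs. Since time.time() brackets the whole call, the measured durations include process startup as well as the file search itself.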