initial commit with stuff from arise
Commit 0faf445b39
62 changed files with 3401 additions and 0 deletions

lib/functions/README.md (new file, 13 lines)
@@ -0,0 +1,13 @@
# Arise Functions

Like most larger projects built in Bash, Arise is a modular program split up into functions. The folders in this directory contain the source files for each function defined for use by Arise.

## Implementation Notes

Bash is an interesting language because its functions are "fake" in the sense that they are simply a reader-friendly way of performing command grouping operations and nothing more. They are not capable of actually returning data in the way that functions do in most other programming languages. What's more fun is that unless you're careful, modifications to environment variables within a function will carry over to subsequent actions outside of that function due to the dynamic variable scoping in Bash.
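
A minimal sketch of that leakage (illustrative only, not code from Arise):

```bash
#!/bin/bash
title="old value"

set_title() {
    # No 'local' and no subshell, so this assignment lands in the caller's scope
    title="new value"
}

set_title
echo "$title"   # prints "new value"; the call leaked its modification
```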

As a result of this limitation, Arise makes use of subshells to work around Bash's variable scoping. While this may not be the most efficient workaround, it keeps variables set inside a subshell function from leaking out and contaminating the parent function. Not all functions need this protection, so many functions are simply run inline instead of being routed into a subshell.

The two types of functions Arise uses are referred to as **Inline** and **Subshell**. The difference between these two categories is that **Inline** functions are declared with `{}`, while **Subshell** functions are declared with `()`. For organisational purposes, each category has its own folder for source files.
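
A small illustration of the two declaration styles (hypothetical functions, not taken from the Arise source):

```bash
#!/bin/bash
count=0

bump_inline() {     # Inline: body grouped with {}
    count=$((count + 1))
}

bump_subshell() (   # Subshell: body runs in a child process
    count=$((count + 1))
    echo "inside subshell: count=$count"
)

bump_inline         # count is now 1 in the calling shell
bump_subshell       # prints "inside subshell: count=2"
echo "$count"       # still 1; the subshell's change was discarded
```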

Upon program run, Arise pulls in all `*.sh` files in both the `inline` and `subshell` directories. As a convention, each function is broken up into its own source file, named `function_name.sh`. Documentation for individual functions and their usage can be found in the comments at the top of each function source file.
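
The loading step might look roughly like this (a sketch only; the paths and entry point here are assumptions, not the actual Arise bootstrap code):

```bash
# Source every function definition before dispatching the requested build
for src in lib/functions/inline/*.sh lib/functions/subshell/*.sh; do
    source "$src"
done
```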

lib/functions/inline/arise_help.sh (new file, 37 lines)
@@ -0,0 +1,37 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Prints help
#
#############
# Usage:
# arise_help

arise_help() {
cat <<EOF
Arise, v$arise_version

Welcome to Arise, a static site generator written in Bash

Usage:
------
bash arise build -[k][f]
# Builds the entire site
available in all build modes:
-k: Keeps source files in output
-f: Force overwrite pre-existing output
bash arise -[p|s|r][k][f]
# Builds only specific parts of the site
# Useful for testing purposes
mutually exclusive options:
-p: Build pages only mode
-s: Build sitemap only mode
-r: Build rss only mode

------
Please visit GitHub for more detailed info:
https://github.com/spectrasecure/arise

EOF
}

lib/functions/inline/arise_logo.sh (new file, 37 lines)
@@ -0,0 +1,37 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Displays ASCII art of the Arise logo :)
#
#############
# Usage:
# arise_logo

arise_logo() {
cat <<EOF
=========================================================
=========================================================

@@@
@@#@ @####@@@ ##@ @@@#+@@@@@ @@@@@@@@@@
@++@ @+++++++@ ++@ @+++++++@ @++++++++@
@+++@ @++@@@*++@ ++@ @++++++++@ @+++++++@
@@+*+@ @++@ @@+@ ++@ @++@#+@@@ @@@@@@@@@
@++@+@ @++@ @++@ ++@ @++@#+@
@++@@+@ @++@ @++@ ++@ @++@#+@@@ @@@@@@@@@
@#+#@@+@ @++@ @++@ ++@ *+++++++@ @+++++++@
@+++@@+@ @++@++@ ++@ @@+++++++ @+++++++@
@++++@@+@ @@@@@++++@ ++@ @@#+@@++@ @@@@@@@@@
@#+@@++@+@ @++++++#@ ++@ #+@@++@
@++@ @+++@ @+@@@@++@ ++@ @@@#+@@++@ @@@@@@@@@@
@++@ @+++@ @+@ @++@ ++@ @++++++++ @++++++++@
@*+@@ @++@ @+@ @++@ ++@ ++++++++@ @++++++++@
@@@@ @@@ @@@ @@@@ @@@ @@@@#+@@@ @@@@@@@@@@
@@@

=========================================================
=========================================================

EOF
}

lib/functions/inline/build_footer.sh (new file, 13 lines)
@@ -0,0 +1,13 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Appends the footer/closing tags to a page.
#
#############
# Usage:
# build_footer destination.html

build_footer() {
cat $config/footer.html >> $1
}

lib/functions/inline/build_header.sh (new file, 38 lines)
@@ -0,0 +1,38 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Builds the page header
#
# This function assumes that metadata has already been fetched in the current subshell. If no metadata is present, it will do nothing.
#
#############
# Usage:
# build_header destination.html

build_header() {
# Verify that metadata variables are populated before running.
[[ $title != '' ]] && {
cat $config/header.html > $1

# If enabled (default:true), add a configurable content header after the metadata header. The purpose of this is to enable a standardised header for stuff like post dates that should be on *most* pages, but can be disabled on pages the user considers special and wants to build out completely on their own.
[[ $content_header == "true" ]] && cat $config/content_header.html >> $1

# Replace all tags in {{this format}} with their value. We do this using Bash pattern replacement.
page_contents="$(cat $1)"

page_contents="${page_contents//\{\{title\}\}/"$title"}"
page_contents="${page_contents//\{\{author\}\}/"$author"}"
page_contents="${page_contents//\{\{description\}\}/"$description"}"
page_contents="${page_contents//\{\{language\}\}/"$language"}"
page_contents="${page_contents//\{\{thumbnail\}\}/"$thumbnail"}"
page_contents="${page_contents//\{\{published_date\}\}/"$published_date"}"
page_contents="${page_contents//\{\{modified_date\}\}/"$modified_date"}"
page_contents="${page_contents//\{\{canonical_url\}\}/"$canonical_url"}"
page_contents="${page_contents//\{\{base_url\}\}/"$base_url"}"
page_contents="${page_contents//\{\{global_name\}\}/"$global_name"}"

echo "$page_contents" > $1
page_contents=""
}
}
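
For context, the {{tag}} placeholders above live in the user's template files. A hypothetical config/header.html (not part of this commit, shown only to illustrate the substitution) might contain:

```html
<!DOCTYPE html>
<html lang="{{language}}">
<head>
<title>{{title}} | {{global_name}}</title>
<meta name="author" content="{{author}}">
<meta name="description" content="{{description}}">
<link rel="canonical" href="{{canonical_url}}">
</head>
<body>
```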

lib/functions/inline/clean_xml_string.sh (new file, 25 lines)
@@ -0,0 +1,25 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Cleans special characters out of a string intended for use in xml format
#
#############
# Usage:
# clean_xml_string "string with special characters"

clean_xml_string() {
# unclean string -> clean string
input_string="$1"
# replace & with &amp;
input_string=${input_string//\&/\&amp;}
# replace < with &lt;
input_string=${input_string//</\&lt;}
# replace > with &gt;
input_string=${input_string//>/\&gt;}
# replace ' with &apos;
input_string=${input_string//\'/\&apos;}
# replace " with &quot;
input_string=${input_string//\"/\&quot;}
echo "$input_string"
}

lib/functions/inline/clear_metadata.sh (new file, 22 lines)
@@ -0,0 +1,22 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Clears the metadata variables to prevent metadata from carrying over to the wrong page. This is important because of how promiscuous bash is with its variables.
#
#############
# Usage:
# clear_metadata

clear_metadata() {
metadata=''
title=''
author=''
description=''
language=''
thumbnail=''
published_date=''
modified_date=''
relative_url=''
canonical_url=''
}

lib/functions/inline/get_page_metadata.sh (new file, 96 lines)
@@ -0,0 +1,96 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Pulls all the Arise-specific metadata from the header of a given page.
#
# This function is meant to be run inline before other functions so that it can populate the information other functions need to operate upon.
#
#############
# Usage:
# get_page_metadata source.md

get_page_metadata() {
if [[ -e $1 ]]; then
metadata=$(sed -e '/END ARISE/,$d' < $1)

# Main page metadata
title="$(grep "Title::" <<< $metadata)" # Grab the line with the metadata we want
title="${title%\"}" # Remove the trailing quote at the end
title="${title#Title:: }" # Remove the name of the metadata variable from the start
title="${title#\"}" # Remove the quote at the start of the parsed variable

author="$(grep "Author::" <<< $metadata)"
author="${author%\"}"
author="${author#Author:: }"
author="${author#\"}"

description="$(grep "Description::" <<< $metadata)"
description="${description%\"}"
description="${description#Description:: }"
description="${description#\"}"

language="$(grep "Language::" <<< $metadata)"
language="${language%\"}"
language="${language#Language:: }"
language="${language#\"}"

thumbnail="$(grep "Thumbnail::" <<< $metadata)"
thumbnail="${thumbnail%\"}"
thumbnail="${thumbnail#Thumbnail:: }"
thumbnail="${thumbnail#\"}"

published_date="$(grep "Published Date::" <<< $metadata)"
published_date="${published_date%\"}"
published_date="${published_date#Published Date:: }"
published_date="${published_date#\"}"

modified_date="$(grep "Modified Date::" <<< $metadata)"
modified_date="${modified_date%\"}"
modified_date="${modified_date#Modified Date:: }"
modified_date="${modified_date#\"}"

# Clean metadata of XML special characters so we don't break the sitemap or RSS feed
title="$(clean_xml_string "$title")"
author="$(clean_xml_string "$author")"
description="$(clean_xml_string "$description")"
language="$(clean_xml_string "$language")"
thumbnail="$(clean_xml_string "$thumbnail")"
published_date="$(clean_xml_string "$published_date")"
modified_date="$(clean_xml_string "$modified_date")"

# Optional page settings with default settings

# is_toc default: false
is_toc=$(grep "toc::" <<< $metadata | cut -d '"' -f2)
if [[ $is_toc != "true" ]]; then
is_toc="false"
fi

# process_markdown default: true
process_markdown=$(grep "process_markdown::" <<< $metadata | cut -d '"' -f2)
if [[ $process_markdown != "false" ]]; then
process_markdown="true"
fi

# content_header default: true
content_header=$(grep "content_header::" <<< $metadata | cut -d '"' -f2)
if [[ $content_header != "false" ]]; then
content_header="true"
fi

# rss_hide default: false
rss_hide=$(grep "rss_hide::" <<< $metadata | cut -d '"' -f2)
if [[ $rss_hide != "true" ]]; then
rss_hide="false"
fi

# URL
relative_url="$(realpath $(dirname $1) | sed 's@.*arise-out@@g')"'/'
canonical_url="$base_url""$relative_url"
else
# Clear out metadata so that anything calling this function expecting to get new data cannot get old values on accident if the requested file does not exist.
clear_metadata
fi

}
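
For reference, here is a hypothetical metadata block in the shape this parser expects. The field values are invented for illustration, and the terminator is simply any line containing END ARISE:

```
Title:: "An Example Post"
Author:: "Jane Doe"
Description:: "A short demonstration page"
Language:: "en-GB"
Thumbnail:: "/images/example.png"
Published Date:: "2024-01-15"
Modified Date:: "2024-02-01"
toc:: "false"
process_markdown:: "true"
content_header:: "true"
rss_hide:: "false"
END ARISE
```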

lib/functions/subshell/build_page.sh (new file, 35 lines)
@@ -0,0 +1,35 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Builds a page from a source .md file and outputs the built version to 'index.html' in the same directory
#
#############
# Usage:
# build_page source.md

build_page() (

# Switch to page directory
page=$(basename $1)
cd $(dirname $1)

get_page_metadata $page

if [[ $is_toc == "true" ]]; then
build_toc $page
elif [[ $process_markdown == "false" ]]; then
build_header index.html
cat $page | sed -e '1,/END ARISE/d' | cat >> index.html
build_footer index.html
else
build_header index.html
# Grab everything after the Arise metadata block, run it through pandoc to convert to html, and append to our file in progress
cat $page | sed -e '1,/END ARISE/d' | pandoc -f markdown -t html >> index.html
build_footer index.html
fi

# Inline Evaluations - DISABLED, WIP, ENABLE AT YOUR OWN PERIL
# evaluate_inline index.html

)

lib/functions/subshell/build_page_tree.sh (new file, 22 lines)
@@ -0,0 +1,22 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Builds all pages on the site by calling "build_page" for every markdown file it can find outside of /config.
#
# Note that this function actually takes the root site directory to recursively build from as an argument.
#
#############
# Usage:
# build_page_tree /path/to/arise-out/

build_page_tree() (
cd $1

find . -type f -name "index.md" -not \( -path ./config -prune \) | while read fname; do
build_page $fname

# Add the source file to the list of files to remove in cleanup
echo "$(realpath $fname)" >> $removelist
done
)

lib/functions/subshell/build_rss.sh (new file, 78 lines)
@@ -0,0 +1,78 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Recursively crawls through the site and reads page metadata to generate an RSS feed for all content on the website.
#
# The script will output the completed RSS feed to the location specified as an argument.
#
#############
# Usage:
# build_rss rss.xml

build_rss() (

# Switch to rss file's directory
touch $1
rss=$(realpath $1)
cd $(dirname $1)

# Wipe out the existing rss feed, if there is one, and declare our new rss feed
# Note that metadata descriptors are pulled from the index.md file that lives in the same folder as the destination for the rss.xml file.
get_page_metadata index.md
cat > $rss <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
<channel>
<title>$global_name</title>
<description>$description</description>
<link>$base_url</link>
<language>$language</language>
<generator>Arise</generator>
EOF

[[ $favicon != '' ]] && {
cat >> $rss <<EOF
<image>
<url>$base_url$favicon</url>
<title>$global_name</title>
<link>$base_url</link>
</image>
EOF
}

cat >> $rss <<EOF
<atom:link href="$base_url/rss.xml" rel="self" type="application/rss+xml"/>
<ttl>60</ttl>
<lastBuildDate>$(date --rfc-822)</lastBuildDate>
EOF

# List every directory on the site (except config). This makes up our feed since Arise is built to use directory roots as page URLs
find . -type d -not \( -path ./config -prune \) | while read fname; do
page_index=$(realpath "$fname"'/index.md')

if [ -e $page_index ]; then
get_page_metadata $page_index

if [[ $rss_hide != "true" ]] && [[ $is_toc != "true" ]]; then
# Convert html's ISO8601 date to RSS's RFC-822. Fuck you RSS.
rss_date=$(date -d "$published_date" --rfc-822)

cat >> $rss <<EOF
<item>
<title>$title</title>
<dc:creator>$author</dc:creator>
<description>$description</description>
<link>$canonical_url</link>
<pubDate>$rss_date</pubDate>
</item>
EOF
fi
clear_metadata
fi
done

# Close up the rss feed
echo -e '</channel>\n</rss>' >> $rss

)

lib/functions/subshell/build_sitemap.sh (new file, 44 lines)
@@ -0,0 +1,44 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Automatically generates a sitemap at the given file location.
#
# Note that this function will map out your site using the specified location as the root of its mapping crawl. If you define a sitemap location in a subdirectory of your website, it will only map subfolders of that location.
#
#############
# Usage:
# build_sitemap /path/to/sitemap.xml

build_sitemap() (

# Switch to sitemap directory
touch $1
sitemap=$(basename $1)
cd $(dirname $1)

# Wipe out the existing sitemap, if there is one, and declare our new sitemap
echo '<?xml version="1.0" encoding="UTF-8"?>' > $sitemap
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' >> $sitemap

# List every directory in our sitemap (except config). This makes up our sitemap since Arise is built to use directory roots as page URLs
find . -type d -not \( -path ./config -prune \) | while read fname; do

# Rewrite the local path from the find command as the live web URL in the <loc> tag required by the sitemap standard
echo -e '<url>\n<loc>'"$base_url"'/'"$(echo $fname | sed -n -e 's|\.\/||p')"'</loc>' >> $sitemap

# If this page contains an Arise-style index page with a date modified, include that as a <lastmod> for the sitemap standard
modified_date=''
get_page_metadata $fname/index.md
if [ -n "$modified_date" ]; then
echo '<lastmod>'"$modified_date"'</lastmod>' >> $sitemap
fi
clear_metadata

# Close the <url> tag for the current URL being looped through
echo '</url>' >> $sitemap
done

# Close up the sitemap
echo '</urlset>' >> $sitemap
)
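
For a rough sense of the output this produces (base URL, paths, and dates below are hypothetical), a two-page site would yield something along these lines:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url>
<loc>https://example.com/</loc>
<lastmod>2024-02-01</lastmod>
</url>
<url>
<loc>https://example.com/posts/hello-world</loc>
<lastmod>2024-01-15</lastmod>
</url>
</urlset>
```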

lib/functions/subshell/build_toc.sh (new file, 51 lines)
@@ -0,0 +1,51 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Creates a table of contents at the location of the specified source file.
#
#############
# Usage:
# build_toc index.md

build_toc() (

# Throw the metadata header together and add the source file to the list of files to remove in cleanup
toc_source=$(basename $1)
cd $(dirname $1)
get_page_metadata $toc_source
echo "$(realpath $toc_source)" >> $removelist
build_header index.html

# Add the title and start of the table for the TOC
cat >> index.html <<EOF
<h1>$title</h1>
<p id="arise-toc">
<table id="arise-toc-table">
<tr class="arise-toc-tr">
<th class="arise-toc-th">Date</th>
<th class="arise-toc-th">Title</th>
<th class="arise-toc-th">Description</th>
</tr>
EOF
clear_metadata

# Make each entry into an individual table row. For now we're storing these in a temp file so that we can sort it after we're done generating all the entries in the TOC.
toc_tmp="arise-toc-$RANDOM.tmp"
find . -mindepth 2 -maxdepth 2 -type f -name 'index.md' | while read fname; do
get_page_metadata $fname
echo '<tr class="arise-toc-tr"><td class="arise-toc-td">'"$published_date"'</td><td class="arise-toc-td"><a href="'"$canonical_url"'">'"$title"'</a></td><td class="arise-toc-td">'"$description"'</td></tr>' >> $toc_tmp
clear_metadata
done

# Sort all of our contents by date so that they're not in random order
sort -r $toc_tmp >> index.html
rm $toc_tmp

# Final page bits
cat >> index.html <<EOF
</table>
</p>
EOF
build_footer index.html
)

lib/functions/subshell/evaluate_inline.sh (new file, 26 lines)
@@ -0,0 +1,26 @@
#!/bin/bash
#############
# DESCRIPTION
#############
# Evaluates inline bash snippets when building pages.
#
# This functionality is currently disabled for the initial release: even though it "works" when called by build_page, the parsing isn't very good and the syntax for calling an inline evaluation could use some work.
#
#############
# Function Usage:
# evaluate_inline index.html
#
# Inline Snippet Usage:
# <pre>sh# echo "Hello World!" </pre>

evaluate_inline() (

evaluation_source=$(basename $1)
cd $(dirname $1)

while grep "<pre>sh#" $evaluation_source
do
replacement=$(bash <<< $(sed -n -e s$'\001''<pre>sh#\(.*\)</pre>'$'\001''\1'$'\001''p' < $evaluation_source | head -1))
awk 'NR==1,/<pre>sh#.*<\/pre>/{sub(/<pre>sh#.*<\/pre>/, "'"$replacement"'")}{print >"'"$evaluation_source"'"}' $evaluation_source || break
done
)