- What is this?
- Assumptions
- What's in here?
- Bootstrap the project
- Hide project secrets
- Save media assets
- Add a page to the site
- Run the project
- Scripts for images
- COPY editing
- Arbitrary Google Docs
- Run Python tests
- Run Javascript tests
- Compile static assets
- Test the rendered app
- Deploy to S3
- Deploy to EC2
- Install cron jobs
- Install web services
- Run a remote fab command
- Report analytics
Show how Google Maps renders disputed territories differently depending on who's looking (in-progress for #owhack)
http://opennews.kzhu.io/map-disputes/
This project was built for the Knight-Mozilla-MIT "The Open Internet" Hack Day, held June 21-22, 2014.
Contributors:
- Jose Dominguez (@josmas)
- Alyson Hurt (@alykat)
- Gus Wezerek (@gwezerek)
- Katie Zhu (@ktzhu)
What we did:
- Identified disputed territories (Natural Earth Data disputed territories shapefile data)
- Removed locations that did not reflect unusual borders in Google Maps
- Pulled descriptions from Wikipedia
- Wrote a script to screen-capture maps from various country-specific instances of Google Maps
- Built a website
Things left to do:
- Connect image-cropping tool to the Google Spreadsheet (everything currently hard-coded)
- Better highlight where the disputed boundaries are (either via an image filter on the screencapped maps, a separate map highlighting the border in question, or an animated gif).
- Better automate adding disputed territories to the list and identifying their lat/lon. (Intended to generate this off the shapefile, but ended up doing it manually.)
Data references:
- Breakaway, Disputed Areas data layer from Natural Earth Data
- Wikipedia
- Google Maps
Tools:
- Website built using the NPR Visuals app-template as a starting point
The following things are assumed to be true in this documentation.
- You are running OSX.
- You are using Python 2.7. (Probably the version that came with OSX.)
- You have virtualenv and virtualenvwrapper installed and working.
- You have AWS credentials stored as environment variables locally.
For more details on the technology stack used with the NPR Visuals app-template, see their development environment blog post.
The project contains the following folders and important files:
- confs -- Server configuration files for nginx and uwsgi. Edit the templates, then run fab <ENV> servers.render_confs; don't edit anything in confs/rendered directly.
- data -- Data files, such as those used to generate HTML.
- fabfile -- Fabric commands for automating setup, deployment, data processing, etc.
- etc -- Miscellaneous scripts and metadata for project bootstrapping.
- jst -- Javascript (Underscore.js) templates.
- less -- LESS files, which will be compiled to CSS and concatenated for deployment.
- templates -- HTML (Jinja2) templates, to be compiled locally.
- tests -- Python unit tests.
- www -- Static and compiled assets to be deployed. (a.k.a. "the output")
- www/assets -- A symlink to an S3 bucket containing binary assets (images, audio).
- www/live-data -- "Live" data deployed to S3 via cron jobs or other mechanisms. (Not deployed with the rest of the project.)
- www/test -- Javascript tests and supporting files.
- app.py -- A Flask app for rendering the project locally.
- app_config.py -- Global project configuration for scripts, deployment, etc.
- copytext.py -- Code supporting the COPY editing workflow.
- crontab -- Cron jobs to be installed as part of the project.
- public_app.py -- A Flask app for running server-side code.
- render_utils.py -- Code supporting template rendering.
- requirements.txt -- Python requirements.
- static.py -- Static Flask views used in both app.py and public_app.py.
Node.js is required for the static asset pipeline. If you don't already have it, get it like this:
brew install node
curl https://npmjs.org/install.sh | sh
Then bootstrap the project:
cd map-disputes
mkvirtualenv --no-site-packages map-disputes
pip install -r requirements.txt
npm install less universal-jst
fab update
Problems installing requirements? You may need to run the pip command as ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future pip install -r requirements.txt
to work around an issue with OSX.
To generate images you will need PhantomJS:
brew install phantomjs
and also capturejs:
npm install -g capturejs
Project secrets should never be stored in app_config.py
or anywhere else in the repository. They will be leaked to the client if you do. Instead, always store passwords, keys, etc. in environment variables and document that they are needed here in the README.
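For example, here is a minimal sketch of reading a secret from the environment in Python; the variable name MAP_DISPUTES_API_KEY is hypothetical and only for illustration.

import os

# Hypothetical secret: export MAP_DISPUTES_API_KEY in your shell before running.
SECRET_API_KEY = os.environ.get('MAP_DISPUTES_API_KEY')

if SECRET_API_KEY is None:
    raise KeyError('MAP_DISPUTES_API_KEY is not set; see the README.')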
Large media assets (images, videos, audio) are synced with an Amazon S3 bucket called assets.apps.npr.org
in a folder with the name of the project. This allows everyone who works on the project to access these assets without storing them in the repo, giving us faster clone times and the ability to open source our work.
Syncing these assets requires running a couple different commands at the right times. When you create new assets or make changes to current assets that need to get uploaded to the server, run fab assets.sync
. This will do a few things:
- If there is an asset on S3 that does not exist on your local filesystem it will be downloaded.
- If there is an asset that exists on your local filesystem but not on S3, you will be prompted to either upload (type "u") OR delete (type "d") your local copy.
- You can also upload all local files (type "la") or delete all local files (type "da"). Type "c" to cancel if you aren't sure what to do.
- If both you and the server have an asset and they are the same, it will be skipped.
- If both you and the server have an asset and they are different, you will be prompted to take either the remote version (type "r") or the local version (type "l").
- You can also take all remote versions (type "ra") or all local versions (type "la"). Type "c" to cancel if you aren't sure what to do.
Unfortunately, there is no automatic way to know when a file has been intentionally deleted from the server or your local directory. When you want to simultaneously remove a file from the server and your local environment (i.e. it is no longer needed in the project), run fab assets.rm:"www/assets/file_name_here.jpg"
A site can have any number of rendered pages, each with a corresponding template and view. To create a new one:
- Add a template to the templates directory. Ensure it extends _base.html.
- Add a corresponding view function to app.py. Decorate it with a route to the page name, i.e. @app.route('/filename.html') (see the sketch after this list).
- By convention, only views that end with .html and do not start with _ will automatically be rendered when you call fab render.
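For illustration, here is a minimal sketch of the view side of this, assuming a hypothetical about.html page. In practice, reuse the existing app object in app.py and copy the context-building pattern from one of its existing views.

from flask import Flask, render_template

app = Flask(__name__)  # placeholder; in the real project, use the app defined in app.py

@app.route('/about.html')
def about():
    # Renders templates/about.html, which should extend _base.html.
    return render_template('about.html')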
A flask app is used to run the project locally. It will automatically recompile templates and assets on demand.
workon $PROJECT_SLUG
python app.py
Visit localhost:8000 in your browser.
There is a script in the scripts folder to generate map images based on location. It uses capturejs and PIL.
- Install capturejs with: npm install capturejs -g
- PIL is already installed via the requirements file.

Run the script:
cd scripts; python grab_images

Note: this is a work in progress; the script currently contains hardcoded data for the images. A rough sketch of the approach follows below.
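The sketch below shows the general idea only: it assumes capturejs exposes --uri, --output and --viewportsize options, and the URL and crop coordinates are made up. See the actual script in scripts/ for the real values and logic.

import subprocess

from PIL import Image

def grab_map(url, out_path):
    # Render the page to an image with capturejs (PhantomJS under the hood).
    # Flag names are assumptions; check capturejs --help for the real options.
    subprocess.check_call([
        'capturejs',
        '--uri', url,
        '--output', out_path,
        '--viewportsize', '1200x800',
    ])

    # Crop the screenshot down to the map area with PIL (coordinates are placeholders).
    image = Image.open(out_path)
    image.crop((100, 100, 1100, 700)).save(out_path)

if __name__ == '__main__':
    grab_map('https://www.google.com/maps/@34.5,76.0,6z', 'example_dispute.png')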
This app uses a Google Spreadsheet for a simple key/value store that provides an editing workflow.
View the sample copy spreadsheet.
This document is specified in app_config
with the variable COPY_GOOGLE_DOC_KEY
. To use your own spreadsheet, change this value to reflect your document's key (found in the Google Docs URL after &key=
).
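For example, in app_config.py (the value below is a placeholder; paste in your own document's key):

COPY_GOOGLE_DOC_KEY = 'your-google-doc-key-here'  # placeholder; use the key from your spreadsheet's URL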
A few things to note:
- If there is a column called key, there is expected to be a column called value, and rows will be accessed in templates as key/value pairs.
- Rows may also be accessed in templates by row index using iterators (see below).
- You may have any number of worksheets.
- This document must be "published to the web" using Google Docs' interface.
The app template is outfitted with a few fab
utility functions that make pulling changes and updating your local data easy.
To update the latest document, simply run:
fab copytext.update
Note: copytext.update
runs automatically whenever fab render
is called.
At the template level, Jinja maintains a COPY
object that you can use to access your values in the templates. Using our example sheet, to use the byline
key in templates/index.html
:
{{ COPY.attribution.byline }}
More generally, you can access anything defined in your Google Doc like so:
{{ COPY.sheet_name.key_name }}
You may also access rows using iterators. In this case, the column headers of the spreadsheet become keys and the row cells values. For example:
{% for row in COPY.sheet_name %}
{{ row.column_one_header }}
{{ row.column_two_header }}
{% endfor %}
When naming keys in the COPY document, please attempt to group them by common prefixes and order them by appearance on the page. For instance:
title
byline
about_header
about_body
about_url
download_label
download_url
Sometimes, our projects need to read data from a Google Doc that's not involved with the COPY rig. In this case, we've got a class for you to download and parse an arbitrary Google Doc to a CSV.
This solution will download the uncached version of the document, unlike methods that use the "publish to the web" functionality baked into Google Docs. Published versions can take up to 15 minutes to update!
First, export a valid Google username (email address) and password to your environment.
export APPS_GOOGLE_EMAIL=foo@gmail.com
export APPS_GOOGLE_PASS=MyPaSsW0rd1!
Then, you can load up the GoogleDoc
class in etc/gdocs.py
to handle the task of authenticating and downloading your Google Doc.
Here's an example of what you might do:
import csv

from etc.gdocs import GoogleDoc

def read_my_google_doc():
    # Describe the document to fetch: key, sheet (gid), format and local file name.
    doc = {}
    doc['key'] = '0ArVJ2rZZnZpDdEFxUlY5eDBDN1NCSG55ZXNvTnlyWnc'
    doc['gid'] = '4'
    doc['file_format'] = 'csv'
    doc['file_name'] = 'gdoc_%s.%s' % (doc['key'], doc['file_format'])

    # Authenticate and download the document to the data directory.
    g = GoogleDoc(**doc)
    g.get_auth()
    g.get_document()

    # Read the downloaded CSV back in and print each row.
    with open('data/%s' % doc['file_name'], 'rb') as readfile:
        csv_file = list(csv.DictReader(readfile))

    for line_number, row in enumerate(csv_file):
        print line_number, row

read_my_google_doc()
Google documents will be downloaded to data/gdoc.csv
by default.
You can pass the class many keyword arguments if you'd like; here's what you can change:
- gid AKA the sheet number
- key AKA the Google Docs document ID
- file_format (xlsx, csv, json)
- file_name (to download to)
See etc/gdocs.py
for more documentation.
Python unit tests are stored in the tests
directory. Run them with fab tests
.
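As a sketch of what a test module in tests/ might look like (the test name is made up, and it assumes app.py exposes a Flask app named app with an index route):

import unittest

import app

class IndexTestCase(unittest.TestCase):
    def setUp(self):
        # Flask's built-in test client lets us request pages without running a server.
        self.client = app.app.test_client()

    def test_index_renders(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)

if __name__ == '__main__':
    unittest.main()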
With the project running, visit localhost:8000/test/SpecRunner.html.
Compile LESS to CSS, compile Javascript templates to Javascript, and minify all assets:
workon map-disputes
fab render
(This is done automatically whenever you deploy to S3.)
If you want to test the app once you've rendered it out, just use the Python webserver:
cd www
python -m SimpleHTTPServer
To deploy the rendered site to S3 (staging, in this example):
fab staging master deploy
You can deploy to EC2 for a variety of reasons. We cover two cases: Running a dynamic web application (public_app.py
) and executing cron jobs (crontab
).
Servers capable of running the app can be setup using our servers project.
For running a Web application:
- In app_config.py, set DEPLOY_TO_SERVERS to True.
- Also in app_config.py, set DEPLOY_WEB_SERVICES to True.
- Run fab staging master servers.setup to configure the server.
- Run fab staging master deploy to deploy the app.
For running cron jobs:
- In app_config.py, set DEPLOY_TO_SERVERS to True.
- Also in app_config.py, set INSTALL_CRONTAB to True.
- Run fab staging master servers.setup to configure the server.
- Run fab staging master deploy to deploy the app.
You can configure your EC2 instance to both run Web services and execute cron jobs; just set DEPLOY_WEB_SERVICES and INSTALL_CRONTAB (along with DEPLOY_TO_SERVERS) to True in app_config.py, as described above.
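For reference, the relevant settings in app_config.py might look roughly like this when an instance should do both (these are the variables described above; the surrounding configuration is omitted):

# In app_config.py
DEPLOY_TO_SERVERS = True      # deploy the project to EC2 at all
DEPLOY_WEB_SERVICES = True    # render and install the nginx/uwsgi confs
INSTALL_CRONTAB = True        # install the project crontab on deploy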
Cron jobs are defined in the file crontab
. Each task should use the cron.sh
shim to ensure the project's virtualenv is properly activated prior to execution. For example:
* * * * * ubuntu bash /home/ubuntu/apps/map_disputes/repository/cron.sh fab $DEPLOYMENT_TARGET cron_jobs.test
To install your crontab set INSTALL_CRONTAB
to True
in app_config.py
. Cron jobs will be automatically installed each time you deploy to EC2.
The cron jobs themselves should be defined in fabfile/cron_jobs.py
whenever possible.
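A minimal sketch of such a task, assuming Fabric 1.x's @task decorator (the task name test matches the crontab example above; its body is a placeholder):

from fabric.api import task

@task
def test():
    # Placeholder job; a real task might re-fetch data, re-render pages and push to S3.
    print 'cron_jobs.test ran'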
Web services are configured in the confs/
folder.
Running fab servers.setup
will deploy your confs if you have set DEPLOY_TO_SERVERS
and DEPLOY_WEB_SERVICES
both to True
at the top of app_config.py
.
To check that these files are being properly rendered, you can render them locally and see the results in the confs/rendered/
directory.
fab servers.render_confs
You can also deploy only configuration files by running (normally this is invoked by deploy
):
fab servers.deploy_confs
Sometimes it makes sense to run a fabric command on the server, for instance, when you need to render using a production database. You can do this with the fabcast
fabric command. For example:
fab staging master servers.fabcast:deploy
If any of the commands you run themselves require executing on the server, the server will SSH into itself to run them.
The Google Analytics events tracked in this application are:
| Category | Action | Label | Value | Custom 1 | Custom 2 |
|---|---|---|---|---|---|
| map-disputes | tweet | location | | | |
| map-disputes | location | | | | |
| map-disputes | comments_opened | | | | |
| map-disputes | comments_closed | seconds_open | | | |
| map-disputes | comments_open_for | seconds_open | | | |
Notes:
- The comments_read action is fired once the comments pane has been open for at least ten seconds.