= Short Notes on PSQL =<br />
<br />
PSQL aka Postgres aka PostgreSQL... I prefer psql.<br />
<br />
== Dump and Restore ==<br />
<br />
<pre># dump:<br />
$ pg_dump -h host -p 5432 -U user -F c -b -v -f /tmp/db_name.backup db_name<br />
# restore:<br />
$ pg_restore -h host -p 5432 -U user -d db_name -v /tmp/db_name.backup</pre><br />
<br />
== Delete Duplicate Rows ==<br />
<br />
Before you can add a <tt>unique</tt> constraint to a table, you have to make sure the existing data satisfies it.<br />
<br />
Assume a table <tt>table_T</tt> whose columns <tt>criteria_1</tt>, ..., <tt>criteria_N</tt> should be unique together.<br />
<br />
<pre><br />
--<br />
-- list rows that do not satisfy the uniqueness constraint<br />
--<br />
SELECT<br />
    criteria_1,<br />
    ...<br />
    criteria_N,<br />
    COUNT(*)<br />
FROM<br />
    table_T<br />
GROUP BY<br />
    criteria_1, ..., criteria_N<br />
HAVING<br />
    COUNT(*) > 1<br />
ORDER BY<br />
    criteria_1, ..., criteria_N;<br />
<br />
--<br />
-- delete the duplicate rows, keeping the one with the lowest id value<br />
--<br />
DELETE FROM<br />
    table_T a<br />
USING table_T b<br />
WHERE<br />
    a.id > b.id<br />
    AND a.criteria_1 = b.criteria_1<br />
    ...<br />
    AND a.criteria_N = b.criteria_N;<br />
</pre><br />
<br />
== Set Sequence Value to Max of Table's ID ==<br />
<br />
<pre><br />
SELECT setval('table_id_seq', (SELECT MAX(id) FROM table));<br />
</pre><br />
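<br />
If the <tt>id</tt> column is a <tt>serial</tt>, you can avoid hard-coding the sequence name by using <tt>pg_get_serial_sequence</tt>; the <tt>COALESCE</tt> guards against an empty table (a sketch - <tt>my_table</tt> is a placeholder name):<br />
<br />
<pre><br />
SELECT setval(pg_get_serial_sequence('my_table', 'id'),<br />
              (SELECT COALESCE(MAX(id), 1) FROM my_table));<br />
</pre><br />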
<br />
== Find and Kill Stuck Queries ==<br />
<br />
To get a list of queries that have been running for more than 5 minutes:<br />
<pre><br />
SELECT<br />
    pid,<br />
    NOW() - pg_stat_activity.query_start AS duration,<br />
    query,<br />
    state<br />
FROM pg_stat_activity<br />
WHERE (NOW() - pg_stat_activity.query_start) > INTERVAL '5 minutes';<br />
</pre><br />
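<br />
If you'd rather cancel just the offending query while keeping the connection alive, <tt>pg_cancel_backend</tt> is the gentler alternative to terminating the backend:<br />
<pre><br />
SELECT pg_cancel_backend(_pid_);<br />
</pre><br />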
<br />
Kill these by PID:<br />
<pre><br />
SELECT pg_terminate_backend(_pid_);<br />
</pre><br />
<br />
= Short Notes on JS =<br />
<br />
== Simple Mapping ==<br />
<br />
<pre><br />
let a = {a: 1, b: 2, c: 3};<br />
<br />
Object.keys(a).map(i => console.log(i));<br />
> a<br />
> b<br />
> c<br />
>> [undefined, undefined, undefined]<br />
<br />
a = [1, 2, 3];<br />
a.map((i, k) => console.log(i, k));<br />
> 1 0<br />
> 2 1<br />
> 3 2<br />
>> [undefined, undefined, undefined]<br />
<br />
// fill() is required here, as it materializes the array's elements;<br />
// map() skips the holes of a sparse array, so a bare Array(8) would<br />
// map to another array of 8 holes<br />
Array(8).fill().map((_, i) => i * i);<br />
>> [0, 1, 4, 9, 16, 25, 36, 49]</pre><br />
<br />
== Fetch Patterns ==<br />
<br />
<pre>fetch("url", { options... })<br />
.then((response) => {<br />
if (!response.ok)<br />
throw new Error('Network response was not ok');<br />
return response.json(); // or response.blob(), etc.<br />
})<br />
.then((data) => {<br />
// do something with the data received<br />
})<br />
.catch((error) => {<br />
console.error('Failed to fetch:', error);<br />
});</pre><br />
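<br />
The same flow reads more linearly with async/await (a sketch - the URL and response handling are placeholders):<br />
<br />
<pre>async function fetchData(url) {<br />
    const response = await fetch(url);<br />
    if (!response.ok)<br />
        throw new Error(`HTTP ${response.status}`);<br />
    return response.json(); // or response.blob(), etc.<br />
}<br />
<br />
fetchData("url")<br />
    .then((data) => { /* do something with the data */ })<br />
    .catch((error) => console.error('Failed to fetch:', error));</pre><br />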
<br />
== Environment Variables in WebPack ==<br />
<br />
WebPack bundles run in the browser, where there is no <tt>process.env</tt> (duh), so you need to "bake" any relevant environment variables into the bundle during build:<br />
<pre>new webpack.DefinePlugin({<br />
    'process.env': {<br />
        NODE_ENV: JSON.stringify(process.env.NODE_ENV),<br />
        STAGE: JSON.stringify(process.env.STAGE),<br />
        // ...<br />
    }<br />
})</pre><br />
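<br />
DefinePlugin performs a textual replacement at build time, so the "baked" values can be read in application code as usual (a sketch - <tt>STAGE</tt> is whatever you defined above):<br />
<br />
<pre>if (process.env.NODE_ENV !== 'production') {<br />
    console.log('running stage:', process.env.STAGE);<br />
}</pre><br />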
<br />
== Debugging in VS Code on Linux ==<br />
<br />
* install ''Debugger for Chrome'' extension<br />
* open the ''launch.json'' file, and edit as follows:<br />
** a custom profile - ''user-data-dir'' - is used, to make sure you have a clean (enough) slate<br />
<pre>{<br />
    // Use IntelliSense to learn about possible attributes.<br />
    // Hover to view descriptions of existing attributes.<br />
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387<br />
    "version": "0.2.0",<br />
    "configurations": [<br />
        {<br />
            "type": "chrome",<br />
            "request": "launch",<br />
            "runtimeExecutable": "/usr/bin/chromium-browser",<br />
            "runtimeArgs": ["--remote-debugging-port=9222", "--user-data-dir=/home/user/tmp/remote-profile"],<br />
            "name": "Launch Chrome against localhost",<br />
            "url": "http://localhost:3000",<br />
            "webRoot": "${workspaceFolder}"<br />
        }<br />
    ]<br />
}</pre><br />
* run the npm dev server (the port used should correspond to the ''url'' key above)<br />
* hit debug in VS Code, add breakpoints, etc., and enjoy!<br />
** this will actually open a new browser window, as a subwindow of VS Code - neat!<br />
<br />
= Short Notes on Wavelets =<br />
<br />
== Integer Haar Wavelets, Python implementation ==<br />
<br />
The code provided is in no way optimized for speed - it creates too many temporaries and duplicates.<br />
<br />
This code is for illustration only, and optimization for speed is left to the reader as an exercise.<br />
<br />
=== 1D Case ===<br />
<br />
This is a trivial implementation of Haar integer-to-integer wavelets.<br />
<br />
The ''d'' array (''list'' or ''numpy.array'') '''has''' to have a length that is a power of 2.<br />
<br />
Note that resulting values typically use 1 more bit than original ones - if source values are in [0..N) interval, then resulting values are in (-N, N) interval.<br />
<br />
==== Using Lists ====<br />
<br />
<pre>def haar_int_fwd_1d(d):<br />
    if len(d) == 1:<br />
        return d<br />
    even = d[::2]<br />
    odd = d[1::2]<br />
    # high-pass: pairwise differences<br />
    hp = [j - i for i, j in zip(even, odd)]<br />
    # low-pass: even samples plus the rounded-up half of the difference<br />
    lp = [i + (w >> 1) + (w % 2) for i, w in zip(even, hp)]<br />
    return haar_int_fwd_1d(lp) + hp<br />
<br />
def haar_int_inv_1d(d):<br />
    if len(d) == 1:<br />
        return d<br />
    # first half holds the (recursively transformed) low-pass coefficients<br />
    even = haar_int_inv_1d(d[:len(d) >> 1])<br />
    odd = d[len(d) >> 1:]<br />
    # undo the update and predict steps<br />
    lp = [i - (j >> 1) - (j % 2) for i, j in zip(even, odd)]<br />
    hp = [i + j for i, j in zip(lp, odd)]<br />
    # interleave back into even/odd positions<br />
    return [x for t in zip(lp, hp) for x in t]</pre><br />
<br />
==== Using numpy Arrays ====<br />
<br />
<pre>import numpy as np<br />
<br />
def haar_int_fwd_1d_np(d):<br />
    if len(d) == 1:<br />
        return d<br />
    hp = d[1::2] - d[::2]<br />
    lp = d[::2] + (hp >> 1) + (hp % 2)<br />
    return np.concatenate((haar_int_fwd_1d_np(lp), hp))<br />
<br />
def haar_int_inv_1d_np(d):<br />
    if len(d) == 1:<br />
        return d<br />
    lp = haar_int_inv_1d_np(d[:len(d) >> 1])<br />
    hp = d[len(d) >> 1:]<br />
    even = lp - (hp >> 1) - (hp % 2)<br />
    return np.ravel(np.column_stack((even, even + hp)))</pre><br />
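<br />
A quick round-trip sanity check (a sketch; the input must be an integer array whose length is a power of 2):<br />
<br />
<pre>a = np.array([5, 3, 8, 1, 0, 2, 7, 4])<br />
coeffs = haar_int_fwd_1d_np(a)<br />
assert np.array_equal(haar_int_inv_1d_np(coeffs), a)</pre><br />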
<br />
=== 2D Extension ===<br />
<br />
The list-based version is not provided.<br />
<br />
==== numpy Version ====<br />
<br />
This is highly '''un'''-optimized - it creates copy for each step of transformation, and both directions of transformation use ''np.apply_along_axis'', which I suspect builds the new array row by row.<br />
<br />
Both versions could be parallelised and made into in-place transforms. Enjoy!<br />
<br />
<pre>def haar_int_fwd_2d_np(d):<br />
    tmp = np.apply_along_axis(haar_int_fwd_1d_np, 0, d)<br />
    return np.apply_along_axis(haar_int_fwd_1d_np, 1, tmp)<br />
<br />
def haar_int_inv_2d_np(d):<br />
    tmp = np.apply_along_axis(haar_int_inv_1d_np, 1, d)<br />
    return np.apply_along_axis(haar_int_inv_1d_np, 0, tmp)</pre><br />
<br />
= Short Notes on Flask and Flask-RestPlus =<br />
<br />
== Structuring the Project ==<br />
<br />
== Handling Requests ==<br />
<br />
== File Upload ==<br />
<br />
<pre>from flask import request<br />
from flask_restplus import Namespace, Resource<br />
<br />
ns = Namespace('files')  # assumed setup - adjust to your project<br />
<br />
@ns.route('/')<br />
class Handle(Resource):<br />
    @ns.param('data', description='We will just return this.', _in='formData', type='string', required=True)<br />
    @ns.param('file', description='We will save this as /tmp/test.jpg.', _in='formData', type='file', required=True)<br />
    def post(self):<br />
        with open('/tmp/test.jpg', 'wb') as f:<br />
            f.write(request.files['file'].read())<br />
        return {'data': request.form['data']}</pre><br />
<br />
== "MySQL server has gone away" with (Flask-)SQLAlchemy ==<br />
<br />
On infrequently used connections, most cloud hosts kill unused DB connections after some time.<br />
<br />
This is a problem if your SQLAlchemy connection pool is not recycled often enough.<br />
<br />
You can just lower the recycle time to solve this. It is also good to increase the pool size, to handle occasional load spikes:<br />
<br />
<pre>app.config['SQLALCHEMY_POOL_SIZE'] = 100<br />
app.config['SQLALCHEMY_POOL_RECYCLE'] = 240</pre><br />
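<br />
On newer stacks, pessimistic connection testing is an alternative worth considering - SQLAlchemy's <tt>pool_pre_ping</tt> checks each connection before handing it out (assumes SQLAlchemy 1.2+ and a Flask-SQLAlchemy version that supports <tt>SQLALCHEMY_ENGINE_OPTIONS</tt>):<br />
<br />
<pre>app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {'pool_pre_ping': True}</pre><br />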
<br />
The default is <code>size = 10, recycle = 7200</code>, but many cloud providers kill inactive connections after about 5 minutes - hence the 4-minute recycle period above.<br />
<br />
= Short Notes on AWS =<br />
<br />
== Can't connect to EC2 instance ==<br />
<br />
The two obvious problems with incoming requests that are outside of AWS's scope:<br />
* check the instance's firewall<br />
* check that the app is listening on an external interface (0.0.0.0, not just 127.0.0.1)<br />
<br />
On the AWS side, check the following:<br />
* make sure the Elastic IP is associated with the instance<br />
** find the instance in the EC2 > Instances<br />
** look under Description tab, Elastic IP<br />
** if it's not, go to EC2 > Elastic IPs<br />
** choose an Elastic IP from the list (or allocate a new one) that is not associated with any instance<br />
** choose Actions > Associate address, and associate it with the instance<br />
* make sure Security group permissions allow the connection<br />
** go to EC2 > Security Groups<br />
** select the security group (you can find which security group the instance is in on the EC2 > Instances page, last column)<br />
** on the Inbound tab, check that your protocol is enabled for Source 0.0.0.0/0 (or from your IP)<br />
* make sure your Internet Gateway is connected to your VPC<br />
** make sure the Internet Gateway is attached to your VPC, under VPC > Internet Gateways > Summary tab<br />
** go to VPC > Route Tables, select route table for your VPC<br />
** under Routes tab, make sure that route with destination 0.0.0.0/0, with Target being your internet gateway, exists and is Active<br />
<br />
== Authorization header being removed by ElasticBeanstalk ==<br />
<br />
By default, AWS ElasticBeanstalk's WSGI server strips <code>Authorization</code> header from requests.<br />
<br />
To get these back, just plug your wsgi config through <code>.ebextensions</code>, adding a <code>wsgi.authorization.config</code> file with the following content:<br />
<pre>files:<br />
  "/etc/httpd/conf.d/wsgiauth.conf":<br />
    mode: "000644"<br />
    owner: root<br />
    group: root<br />
    content: |<br />
      WSGIPassAuthorization On</pre><br />
<br />
== IAM Notes ==<br />
<br />
* you need the AmazonRDSReadOnlyAccess policy for your IAM user to be able to list RDS instances<br />
<br />
== AWS Lambda (Py) Notes ==<br />
<br />
For the <tt>handler(event, context)</tt> function, parameters are found in the following places (a minimal sketch follows the list):<br />
<br />
* GET parameters in <tt>event["multiValueQueryStringParameters"]</tt><br />
** ''note'': parameters are stored in arrays, the lambda's parser correctly presumes that there may be multiple values; e.g. <tt>event["multiValueQueryStringParameters"] = {"param": ["value"]}</tt><br />
* path parameters in <tt>event["pathParameters"]</tt><br />
** ''note'': path parameters are specified in the SAM ''yaml'', under <tt>Path</tt> property, as <tt>Path: /v1/user/{user_id}/data/</tt><br />
* POST/PUT body in <tt>event["body"]</tt><br />
** ''note'': stored as string, you have to <tt>json.loads()</tt> or similar<br />
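<br />
A minimal handler pulling all three out (a sketch - the field names come from the API Gateway proxy event, the route and defaults are illustrative):<br />
<br />
<pre>import json<br />
<br />
def handler(event, context):<br />
    # e.g. {"param": ["value"]}<br />
    params = event.get("multiValueQueryStringParameters") or {}<br />
    # e.g. {"user_id": "42"} for Path: /v1/user/{user_id}/data/<br />
    path_params = event.get("pathParameters") or {}<br />
    # the body arrives as a string<br />
    body = json.loads(event["body"]) if event.get("body") else None<br />
    return {"statusCode": 200, "body": json.dumps({"ok": True})}</pre><br />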
<br />
== boto3 Snippets ==<br />
<br />
=== Generate presigned S3 URL ===<br />
<br />
<pre>import boto3<br />
import botocore.client<br />
<br />
s3 = boto3.client('s3', config=botocore.client.Config(signature_version='s3v4', region_name=BUCKET_REGION))<br />
resp = s3.generate_presigned_url('get_object', Params={'Bucket': BUCKET, 'Key': KEY}, ExpiresIn=SECONDS)</pre><br />
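<br />
The same call generates upload URLs as well - just swap the operation name (still standard boto3; bucket/key constants as above):<br />
<br />
<pre>resp = s3.generate_presigned_url('put_object', Params={'Bucket': BUCKET, 'Key': KEY}, ExpiresIn=SECONDS)</pre><br />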
<br />
== AWS Chalice ==<br />
<br />
Prepare your virtualenv, and install boto3 and Chalice in it.<br />
<br />
<pre>$ virtualenv ~/.virtualenvs/test-venv<br />
$ source ~/.virtualenvs/test-venv/bin/activate<br />
(test-venv) $ pip install boto3 chalice</pre><br />
<br />
=== Connection to RDS ===<br />
<br />
Lambdas are typically not restricted to a VPC, while RDS is strictly tied to one.<br />
You need to assign your Chalice Lambdas to the same subnets as the RDS, and the same security group.<br />
Chalice does almost all of this for you!<br />
<br />
You just need to copy all the subnet IDs used by your RDS, and the security group ID.<br />
<br />
<pre># .chalice/config.json:<br />
<br />
{<br />
    "version": "2.0",<br />
    "app_name": "chalice-test",<br />
    "environment_variables": {<br />
        "var": "value"<br />
    },<br />
    "layers": ["arn:aws:lambda:..."],<br />
    "stages": {<br />
        "dev": {<br />
            "api_gateway_stage": "api",<br />
            "subnet_ids": [<br />
                "subnet-...", ...<br />
            ],<br />
            "security_group_ids": [<br />
                "sg-..."<br />
            ]<br />
        }<br />
    }<br />
}</pre><br />
<br />
=== Send and Download Binary Files ===<br />
<br />
There are two sides to this story: your Chalice code, and the receiving code (your frontend app).<br />
<br />
On the Chalice side, you simply use the <tt>Response</tt> object with some of the registered binary types. I typically just use <tt>application/octet-stream</tt>...<br />
<br />
<pre><br />
from chalice import Chalice, Response<br />
<br />
app = Chalice(app_name='binary-echo')  # assumed setup<br />
<br />
# methods/content_types added so the raw binary body is accepted<br />
@app.route('/binary-data', methods=['POST'],<br />
           content_types=['application/octet-stream'])<br />
def bin_echo():<br />
    binary_data = app.current_request.raw_body<br />
    return Response(body=binary_data,<br />
                    status_code=200,<br />
                    headers={'Content-Type': 'application/octet-stream'})<br />
</pre><br />
[https://chalice.readthedocs.io/en/latest/topics/views.html#binary-content For more info see the Chalice documentation]<br />
<br />
On the frontend side, you need to make sure API Gateway does not mess you up - you need to specify <tt>Accept</tt> header of the binary type (not necessarily of the same type as the one returned from Chalice). If you specify <tt>*/*</tt> as the accepted type, you'll receive base64 encoded data.<br />
<br />
=== Getting 403 on a Route ===<br />
<br />
Apart from actual 403 responses, Chalice will return 403 on a route that does not have the <tt>/</tt> prefix:<br />
<br />
<pre><br />
# this will return 403 :S<br />
@app.route('v1/test')<br />
def v1_test():<br />
    return {}<br />
<br />
# this will work :)<br />
@app.route('/v1/test')<br />
def v1_test():<br />
    return {}<br />
</pre><br />
<br />
== Building a Python Layer ==<br />
<br />
We'll be using the Amazon Linux docker image.<br />
<br />
Below, commands starting with <tt>$</tt> are run on your machine, while those starting with <tt>bash-4.2#</tt> are run within the docker container.<br />
<br />
<pre>$ cat docker-compose.yaml<br />
version: "3"<br />
services:<br />
  amzlinux:<br />
    image: "amazonlinux"<br />
    command: "sleep infinity"<br />
    volumes:<br />
      - ./:/host<br />
<br />
$ docker-compose up<br />
Creating network "layer-example_default" with the default driver<br />
Pulling amzlinux (amazonlinux:)...<br />
[...]<br />
Attaching to layer-example_amzlinux_1<br />
<br />
$ docker ps<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
a5107c00ed35 amazonlinux "sleep infinity" 10 seconds ago Up 8 seconds layer-example_amzlinux_1<br />
<br />
$ docker exec -it layer-example_amzlinux_1 /bin/bash<br />
<br />
bash-4.2# yum -y update<br />
# optional - only needed if some of your packages compile from source:<br />
bash-4.2# yum -y groupinstall "Development Tools"<br />
bash-4.2# yum -y install Cython<br />
<br />
bash-4.2# yum -y install python3-pip.noarch zip<br />
[...]<br />
Installed:<br />
python3-pip.noarch 0:9.0.3-1.amzn2.0.1<br />
<br />
bash-4.2# cd /host/<br />
<br />
bash-4.2# mkdir -p sqlalchemy-layer/python<br />
<br />
bash-4.2# pip3 install sqlalchemy -t sqlalchemy-layer/python/<br />
[...]<br />
Successfully installed sqlalchemy-1.3.11<br />
<br />
bash-4.2# pip3 install psycopg2-binary -t sqlalchemy-layer/python/<br />
[...]<br />
Successfully installed psycopg2-binary-2.8.4<br />
<br />
bash-4.2# cd sqlalchemy-layer/<br />
bash-4.2# zip -r aws-sqlalchemy-layer.zip python/ -x \*.pyc</pre><br />
<br />
Now you can upload <tt>aws-sqlalchemy-layer.zip</tt> as a Layer through the AWS console.<br />
<br />
= Short Notes on git =<br />
<br />
== Basic Commands ==<br />
<br />
''git'' maintains 3 "trees" automatically within your local repo - the "working copy" is the files you're actually working on, the "index" is a kind of staging area, and "HEAD" is your last actual commit.<br />
You work on your ''working copy'', then add changes to the ''index'', then commit those to ''HEAD'' locally, and finally push those to the (remote) repo.<br />
<br />
<pre># two ways to check out a repo; 1) first local, then connect<br />
$ git init # init local copy<br />
$ git remote add origin <server> # connect it to remote<br />
# or, 2) checkout a remote repo straight away<br />
$ git clone /path/to/repo # create working copy of local repo<br />
$ git clone user@host:/path/to/repo # create working copy of remote repo<br />
<br />
$ git status # get info about local copy, incl. branch ID, changes, etc.<br />
<br />
$ git fetch # download changes but do not integrate in the HEAD<br />
$ git pull # update the local copy to the latest revision<br />
$ git pull <remote> <branch> # pull specific branch<br />
$ git diff <source> <dest> # view differences between "source" and "dest" branches<br />
$ git merge <branch> # merge changes from another branch (e.g. "origin/master")<br />
# you should merge from remote ("origin") as your local copy may differ<br />
<br />
# set meld as diff and merge tool; use --global to apply globally, not just per-repo<br />
$ git config diff.tool meld<br />
$ git config difftool.prompt false # don't ask each time whether to run meld<br />
$ git config merge.tool meld<br />
$ git config mergetool.prompt false # don't ask each time whether to run meld<br />
$ git difftool # you need to use "difftool" to run meld for diff, not just `git diff`<br />
$ git mergetool # run this only after `git merge` if there are any conflicts to be resolved<br />
<br />
# getting the changes to the remote repo<br />
$ git add [-i] <file> # add changes from file(s) to index (-i = interactive)<br />
$ git commit -m 'message' # commit changes from index to head<br />
$ git push origin master # push changes from head to master branch, or<br />
$ git push <remote> <branch> # push changes to <branch><br />
<br />
$ git checkout -- <file> # revert working-copy changes to <file>; changes already added to the index are kept<br />
$ git fetch origin # fetch latest history from server<br />
$ git reset --hard origin/master # drop all local changes, incl. those in Index<br />
<br />
# branching<br />
$ git branch -av # list existing branches<br />
$ git checkout -b <branch> # create a new branch, and switch to it, or<br />
$ git checkout <branch> # switch to branch (make it your HEAD)<br />
$ git push origin <branch> --set-upstream # make the branch available to others, and set the upstream<br />
$ git branch -d <branch> # delete local branch<br />
$ git branch -dr <remote/branch> # delete remote-tracking branch locally<br />
$ git push origin --delete <branch> # delete the branch on the remote<br />
<br />
# other stuff<br />
$ git tag <tag name> <commit id> # create tag from a commit<br />
$ git log --pretty=oneline [--author=<name>] # one-line per commit listing<br />
$ git log --graph --oneline --decorate --all # ascii tree of branches, tags, ...<br />
$ git log --name-status # list changed files<br />
<br />
# "svn:externals" like inclusion of another repo; need to commit this as well<br />
$ git submodule add -b <branch> <repo> [<folder>]<br />
$ git add <folder> .gitmodules ; + commit ; + push<br />
<br />
$ git submodule update --remote # update the submodules from their respective repo's<br />
</pre><br />
<br />
== Typical Flow ==<br />
<br />
<pre><br />
$ git clone [remote repo] # get a local copy of the repo<br />
$ git checkout -b <branch> # start a new branch<br />
$ git push --set-upstream origin <branch> # set remote as upstream, to be able to push to remote repo<br />
<br />
# do your changes...<br />
$ git add <files> # register changes for commit<br />
$ git commit -m 'message'<br />
$ git push origin <branch> # commit the branch with changes to repo<br />
# repeat as needed...<br />
<br />
# once done with the branch<br />
$ git merge master # pull all changes from "master"<br />
$ git mergetool # resolve conflicts; commit again to branch if needed - `add`, `commit`, `push`<br />
$ git checkout master # switch to master<br />
$ git merge <branch> # this should pass ok; commit to "master" afterwards<br />
$ git branch -d <branch> # clean up - remove the branch<br />
$ git tag <tag> <commit id> # optionally, also tag the commit after merging<br />
</pre><br />
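<br />
If you prefer a linear history, the "pull all changes from master" step can be done with rebase instead of merge - note this rewrites the branch's commits, so it needs a forced push (a sketch using standard git commands):<br />
<br />
<pre><br />
$ git checkout <branch><br />
$ git rebase master # replay the branch's commits on top of master<br />
$ git mergetool # resolve conflicts per replayed commit, then `git rebase --continue`<br />
$ git push --force-with-lease origin <branch> # rewritten history requires a forced push<br />
</pre><br />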
<br />
=== Resolve Conflict ===<br />
<br />
If on <tt>git pull</tt> you get a message:<br />
<pre>error: Your local changes to the following files would be overwritten by merge:<br />
[list of files]<br />
Please, commit your changes or stash them before you can merge.<br />
Aborting</pre><br />
<br />
You need to resolve the conflict manually.<br />
<br />
<pre>git fetch<br />
git add [file]<br />
git commit -m 'resolving'<br />
git merge</pre><br />
<br />
At which point you'll get message:<br />
<br />
<pre>Auto-merging [files]<br />
CONFLICT (content): Merge conflict in [files]<br />
Automatic merge failed; fix conflicts and then commit the result.</pre><br />
<br />
Now you need to manually edit the file(s) to resolve the conflict, and then <tt>add/commit</tt> the resolved file(s).<br />
<br />
== Small Tips ==<br />
<br />
=== Skip Retyping Your Password ===<br />
<br />
<pre>$ git config --global credential.helper store</pre><br />
<br />
Run <tt>git pull</tt> and enter your username and password. These will be stored - in plain text - in the <tt>~/.git-credentials</tt> file and reused each time.<br />
<br />
=== Ignoring Files and Folders ===<br />
<br />
<pre>$ touch .gitignore<br />
# edit the file with your fave editor (folders should end with '/'):<br />
folder/<br />
file.txt<br />
<br />
# it's good practice to commit the file as well<br />
$ git add .gitignore<br />
$ git commit -m "Added .gitignore file"<br />
$ git push origin master</pre><br />
<br />
=== Current Branch ===<br />
<br />
<pre>git rev-parse --abbrev-ref HEAD</pre><br />
<br />
You can add this to <tt>.git/config</tt> as an alias:<br />
<br />
<pre>[alias]<br />
current = rev-parse --abbrev-ref HEAD</pre><br />
<br />
and then use it as<br />
<br />
<pre>$ git current<br />
> master</pre><br />
<br />
=== Set ''meld'' as your <tt>difftool</tt> and <tt>mergetool</tt> ===<br />
<br />
Edit your <tt>~/.gitconfig</tt> file as follows:<br />
<br />
<pre>[diff]<br />
tool = meld<br />
[difftool]<br />
prompt = false<br />
[difftool "meld"]<br />
cmd = meld "$LOCAL" "$REMOTE"</pre><br />
<br />
Now, running <tt>git difftool</tt> will bring up your dear ol' ''meld''.<br />
<br />
For resolution of merge conflicts, add to your <tt>~/.gitconfig</tt> file, choosing one of the two options:<br />
<br />
<pre>[merge]<br />
tool = meld<br />
[mergetool "meld"]<br />
# the middle file is the partially merged file, and you have to<br />
# make sense of the conflicting portions<br />
cmd = meld "$LOCAL" "$MERGED" "$REMOTE" --output "$MERGED"<br />
# the middle file is the "common ancestor" of the conflicting files,<br />
# and you have to choose bits and pieces to reconstruct the new merged file<br />
cmd = meld "$LOCAL" "$BASE" "$REMOTE" --output "$MERGED"</pre><br />
<br />
The first option turns out better in case there was lots to merge, and just a bit to resolve.<br />
<br />
The second option is better when pretty much all changes are in conflict, and reconstructing things by hand turns out easier.<br />
<br />
= Small Docker Notes =<br />
<br />
Note - ''vm'' and ''container'' are used below interchangeably.<br />
<br />
== Installation and Verification ==<br />
<br />
<pre> # install Docker<br />
$ curl -fsSL https://get.docker.com/ | sh<br />
<br />
# add your-user to the docker group to be able to run docker as non-root<br />
$ sudo usermod -aG docker your-user<br />
# you'll need to log out and in after this, or run all docker commands as root<br />
<br />
# verify the installation<br />
$ docker run hello-world<br />
# this should download image from docker hub, and print "Hello from Docker." message</pre><br />
<br />
As a rule of thumb: <tt>`docker`</tt> is the "super-system" used to manage docker containers (and more). <tt>`docker-compose`</tt> does the same tasks as ''docker'', but only on containers defined in ''docker-compose.yml'' file in current directory. So, <tt>`docker ps -a`</tt> will list all containers, <tt>`docker-compose ps`</tt> will list info about all containers from local ''docker-compose.yml'' file.<br />
<br />
If something goes wrong, check<br />
 $ docker-compose logs<br />
or<br />
 $ docker logs ''container-id''<br />
<br />
== Stack in Separate Containers ==<br />
<br />
Let's create a few containers to hold the different parts of the stack:<br />
* nginx<br />
* php-fpm<br />
* mysql<br />
* data volume for mysql<br />
* kyoto tycoon with python<br />
* data volume for kyoto tycoon<br />
<br />
First, create config file for docker compose, called ''docker-compose.yml'' .<br />
<pre>nginx:<br />
  # based on latest nginx image<br />
  image: nginx:latest<br />
  ports:<br />
    # map vm's port 80 to local port 8080<br />
    - 8080:80</pre><br />
<br />
Now that we have defined our first service, let's get it running:<br />
<pre>$ docker-compose up -d<br />
<br />
# you should be able to reach the nginx on localhost:8080 now;<br />
# you can always check running containers, incl. ports map, with<br />
$ docker ps<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
e795e45c3e6b nginx:latest "nginx -g 'daemon off" 24 hours ago Up 24 hours 443/tcp, 0.0.0.0:8080->80/tcp dockertut_nginx_1<br />
<br />
# using container ID or name from `docker ps`, you can get lots of useful info with<br />
$ docker inspect {container-id|name}<br />
</pre><br />
<br />
=== Adding php-fpm ===<br />
<br />
Let's now add VM for php-fpm, and edit the ''docker-compose.yml'' like this:<br />
<pre>nginx:<br />
  build:<br />
    # don't use image, build based on ./docker/nginx/Dockerfile<br />
    ./docker/nginx/<br />
  ports:<br />
    # vm's port 80 will be available as 8080 on localhost<br />
    - 8080:80<br />
  links:<br />
    # has access to 'php' vm<br />
    - php<br />
  volumes:<br />
    # mount current directory as /var/www/html inside the container<br />
    - .:/var/www/html<br />
<br />
php:<br />
  image: php:5.5-fpm<br />
  expose:<br />
    # expose port 9000 only to other vm's, not the host machine like the 'ports' key does<br />
    - 9000<br />
  volumes:<br />
    - .:/var/www/html</pre><br />
<br />
Now the nginx image is based on ''Dockerfile'' in ''./docker/nginx/'' folder, which should look like this:<br />
<br />
<pre>FROM nginx:latest<br />
COPY ./default.conf /etc/nginx/conf.d/default.conf</pre><br />
<br />
See [https://docs.docker.com/engine/reference/builder/ official Dockerfile reference] for details.<br />
<br />
This will build vm from ''nginx:latest'' image, and copy ''./docker/nginx/default.conf'' to ''/etc/nginx/conf.d/'' in the vm. Create this file with the following content:<br />
<pre>server {<br />
    listen 80 default_server;<br />
    root /var/www/html;<br />
    index index.html index.php;<br />
<br />
    charset utf-8;<br />
<br />
    location / {<br />
        try_files $uri $uri/ /index.php?$query_string;<br />
    }<br />
<br />
    location = /favicon.ico { access_log off; log_not_found off; }<br />
    location = /robots.txt { access_log off; log_not_found off; }<br />
<br />
    access_log off;<br />
    error_log /var/log/nginx/error.log error;<br />
<br />
    sendfile off;<br />
<br />
    client_max_body_size 100m;<br />
<br />
    location ~ \.php$ {<br />
        fastcgi_split_path_info ^(.+\.php)(/.+)$;<br />
        fastcgi_pass php:9000;<br />
        fastcgi_index index.php;<br />
        include fastcgi_params;<br />
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;<br />
        fastcgi_intercept_errors off;<br />
        fastcgi_buffer_size 16k;<br />
        fastcgi_buffers 4 16k;<br />
    }<br />
<br />
    location ~ /\.ht {<br />
        deny all;<br />
    }<br />
}</pre><br />
<br />
Note that <tt>root /var/www/html;</tt> will actually use the mount of our local directory.<br />
<br />
Also note <tt>fastcgi_pass php:9000;</tt> that uses the link created using ''links:'' key in the ''docker-compose.yml''.<br />
<br />
Now, let's just add ''index.php'' in the root folder (where ''docker-compose.yml'' resides):<br />
<br />
<pre><!DOCTYPE html><br />
<html lang="en"><br />
<head><br />
    <meta charset="utf-8"><br />
    <title>Hello World!</title><br />
</head><br />
<body><br />
    <?php echo "this is php!"; ?><br />
</body><br />
</html></pre><br />
<br />
Let's start the containers; this time, the php image should be pulled as well:<br />
<pre>$ docker-compose up -d<br />
Pulling php (php:5.5-fpm)...<br />
5.5-fpm: Pulling from library/php<br />
efd26ecc9548: Already exists <-- the nginx image, already pulled<br />
a3ed95caeb02: Download complete<br />
...</pre><br />
<br />
If you get "''Service 'nginx' needs to be built, but --no-build was passed.''" message, run <tt>`docker-compose build`</tt> and again <tt>`docker-compose up -d`</tt>. Verify all is OK and started:<br />
<pre>$ docker ps<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
fd04b796e34f dockertut_nginx "nginx -g 'daemon off" 4 seconds ago Up 4 seconds 443/tcp, 0.0.0.0:8080->80/tcp dockertut_nginx_1<br />
402fb35b762e php:5.5-fpm "php-fpm" 5 seconds ago Up 4 seconds 9000/tcp dockertut_php_1</pre><br />
<br />
Note that nginx's image name has changed, since we have built a new one.<br />
<br />
Now, when you open ''localhost:8080'', you should get "this is php!" message. Open the ''index.php'' file, and edit it; the change should be immediately reflected. Sweet!<br />
<br />
=== Data Volumes ===<br />
<br />
Both the ''nginx'' and ''php'' vm's now use local folder through the ''volumes'' directive.<br />
<br />
Better yet is to have a separate container; we'll build this container based on some image we're already using (to avoid having yet another image downloaded), but it won't be running - it'll just sit there, collecting data. Change ''docker-compose.yml'' like this:<br />
<br />
<pre>nginx:<br />
  build:<br />
    ./docker/nginx/<br />
  ports:<br />
    - 8080:80<br />
  links:<br />
    - php<br />
  volumes_from:<br />
    - app<br />
<br />
php:<br />
  image: php:5.5-fpm<br />
  expose:<br />
    - 9000<br />
  volumes_from:<br />
    - app<br />
<br />
app:<br />
  # it's good practice to "reuse" some image, not to have yet another one pulled<br />
  image: php:5.5-fpm<br />
  volumes:<br />
    - .:/var/www/html<br />
  # container won't run - it'll execute `true` and sit there, collecting data<br />
  command: "true"</pre><br />
<br />
Let's check all is OK:<br />
<br />
<pre>$ docker-compose up -d<br />
Creating dockertut_app_1...<br />
Recreating dockertut_php_1...<br />
Recreating dockertut_nginx_1...<br />
<br />
# list all containers (note that 'app' container is not listed in `ps`, since it's not running):<br />
$ docker ps -a<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
af0110364abf dockertut_nginx "nginx -g 'daemon off" 2 minutes ago Up 2 minutes 443/tcp, 0.0.0.0:8080->80/tcp dockertut_nginx_1<br />
94a1c3d317dd php:5.5-fpm "php-fpm" 2 minutes ago Up 2 minutes 9000/tcp dockertut_php_1<br />
861cca283ceb php:5.5-fpm "true" 2 minutes ago Exited (0) 2 minutes ago dockertut_app_1</pre><br />
<br />
and check that ''localhost:8080'' is still responding.<br />
<br />
=== Adding MySQL ===<br />
<br />
Update the 'php' section:<br />
<pre>php:<br />
  build:<br />
    ./docker/php/<br />
  expose:<br />
    - 9000<br />
  links:<br />
    - mysql<br />
  volumes_from:<br />
    - app</pre><br />
<br />
And add the following:<br />
<pre>mysql:<br />
  image: mysql:latest<br />
  volumes_from:<br />
    - mysqldata<br />
  # set environment variables<br />
  environment:<br />
    MYSQL_ROOT_PASSWORD: secret<br />
    MYSQL_DATABASE: test<br />
    MYSQL_USER: test<br />
    MYSQL_PASSWORD: test<br />
<br />
mysqldata:<br />
  image: mysql:latest<br />
  volumes:<br />
    # the /var/lib/mysql will be present in the container, and "somewhere" on the host<br />
    - /var/lib/mysql<br />
  command: "true"</pre><br />
<br />
And, create ''./docker/php/Dockerfile'':<br />
<pre>FROM php:5.5-fpm<br />
RUN docker-php-ext-install pdo_mysql</pre><br />
<br />
Now we're all set to <tt>`docker-compose build`</tt> and <tt>`docker-compose up -d`</tt>. Let's check what containers we have now:<br />
<pre>$ docker ps -a<br />
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES<br />
e43265be9680 dockertut_nginx "nginx -g 'daemon off" 16 seconds ago Up 14 seconds 443/tcp, 0.0.0.0:8080->80/tcp dockertut_nginx_1<br />
48cd3b544fb1 dockertut_php "php-fpm" 19 seconds ago Up 17 seconds 9000/tcp dockertut_php_1<br />
0e4f40d9ae8b mysql:latest "docker-entrypoint.sh" 21 seconds ago Up 20 seconds 3306/tcp dockertut_mysql_1<br />
963440e29d89 mysql:latest "docker-entrypoint.sh" 22 seconds ago Exited (0) 20 seconds ago dockertut_mysqldata_1<br />
1458644df104 php:5.5-fpm "true" 4 minutes ago Exited (0) 4 minutes ago dockertut_app_1</pre><br />
<br />
Now that all is set up, we can check where docker mounted the folder on the host that is seen as ''/var/lib/mysql'' in the ''mysqldata'' instance:<br />
<pre>$ docker inspect dockertut_mysqldata_1<br />
...<br />
"Mounts": [<br />
    {<br />
        "Name": "d1352deaff99b25ee68bc53c07150b12b5eb7028b6182f1c97f71f7e21c8ee1d",<br />
        "Source": "/var/lib/docker/volumes/d1352deaff99b25ee68bc53c07150b12b5eb7028b6182f1c97f71f7e21c8ee1d/_data",<br />
        "Destination": "/var/lib/mysql",<br />
...</pre><br />
The ''Source'' is the folder on the host that docker mounted as ''Destination'' in the VM.<br />
<br />
If you want all volumes removed when deleting a docker container, use<br />
 $ docker rm -v ''container-id''<br />
Otherwise, the volumes will stay on your disk, taking up space.<br />
<br />
You can also list all volumes registered by docker:<br />
 $ docker volume ls<br />
and use this to remove all "dangling" volumes:<br />
 $ docker volume rm $(docker volume ls -qf dangling=true)<br />
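Newer Docker releases (1.13+) bundle the dangling-volume cleanup into a single command:<br />
 $ docker volume prune<br />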
<br />
Let's check if mysql is up and running, through the terminal. You can open an interactive terminal in a container like this:<br />
<pre>$ docker exec -it dockertut_mysql_1 /bin/bash<br />
root@0e4f40d9ae8b:/# mysql -u root -p<br />
Enter password: <br />
Welcome to the MySQL monitor. Commands end with ; or \g.<br />
...<br />
mysql> _</pre><br />
Neat, huh? The ''-i'' flag keeps STDIN open (interactive), and ''-t'' allocates a terminal.<br />
<br />
== Manage Docker as non-root User ==<br />
<br />
By creating the <tt>docker</tt> group and adding users to it, you can choose which users may manage the docker daemon and processes:<br />
<br />
<pre>$ sudo groupadd docker<br />
$ sudo usermod -aG docker $USER</pre><br />
<br />
Afterwards, restart your machine (or log out and in, and stop and start the docker daemon), and you should be able to control docker without the need for <tt>sudo</tt>.<br />
<br />
= RedBeanPHP Cheat Sheet =<br />
<br />
[http://www.redbeanphp.com/index.php RedBeanPHP site]<br />
<br />
I love RBPHP since it's a single-file, light-weight, cross-DB, and clean ORM. -- my opinion (R) --<br />
<br />
This is by no means a complete cheat sheet; many things were left out - see the [http://www.redbeanphp.com/index.php full documentation] for all tips and tricks.<br />
<br />
In code samples, PHP 5.4+ array notation is used; replace with the array(...) notation for PHP < 5.4.<br />
<br />
== Basics and CRUD ==<br />
<br />
[http://www.redbeanphp.com/index.php?p=/download Download RedBeanPHP]<br />
<br />
<pre>require 'rb.php';<br />
<br />
R::setup(); // creates test SQLite DB in /tmp<br />
R::setup('mysql:host=localhost;dbname=mydatabase', 'user', 'pass'); // MySQL and MariaDB<br />
R::setup('pgsql:host=localhost;dbname=mydatabase', 'user', 'pass'); // PostgreSQL<br />
R::setup('sqlite:/tmp/dbfile.db'); // SQLite<br />
R::setup('cubrid:host=localhost;port=30000;dbname=mydb', 'U','P'); // CUBRID (requires [https://github.com/gabordemooij/RB4Plugins plugin pack])<br />
<br />
$is_connected = R::testConnection();<br />
<br />
// when done (not yet :) - disconnect<br />
R::close();<br />
<br />
// create a new bean, and setting some properties (member and array access supported);<br />
// the bean name has to be lowercase alphabetical;<br />
// properties names have to contain only alphanumeric and underscore<br />
$book = R::dispense('book');<br />
$book->title = 'Learn to Program';<br />
$book['rating'] = 10;<br />
$book['price'] = 29.99;<br />
<br />
// save it - if table does not exist, RBPHP will create it based on added properties;<br />
// if it does exist, RBPHP will update the table to hold any new data, if necessary<br />
$id = R::store($book);<br />
<br />
// MySQL/MariaDB compatible DB's provide insert ID<br />
R::exec('INSERT INTO ... ');<br />
$id = R::getInsertID();<br />
<br />
// reading a bean by ID, or multiple by array of ID's<br />
$book = R::load('book', $id);<br />
$books = R::loadAll('book', $ids);<br />
<br />
// updating a bean<br />
$book->title = 'Learn to fly';<br />
$book->rating = 'good'; // rating will be changed from integer to varchar<br />
$book->published = '2015-02-15'; // column will be added, type 'date'; R::isoDate() and R::isoDateTime() generate current date(time)<br />
R::store($book);<br />
<br />
// deleting a bean or multiple beans<br />
R::trash($book);<br />
R::trashAll($books);<br />
<br />
// delete all beans of given type<br />
R::wipe('book');<br />
<br />
// destroy the whole DB<br />
R::nuke();<br />
<br />
// reload bean from DB<br />
$bean = $bean->fresh();</pre><br />
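<br />
Relations follow the same convention-over-configuration style - one-to-many via an <tt>ownTypeList</tt> property (e.g. <tt>ownPageList</tt>), many-to-many via <tt>sharedTypeList</tt> (RedBeanPHP 4 naming; a minimal sketch):<br />
<br />
<pre>$book = R::dispense('book');<br />
$page = R::dispense('page');<br />
$book->ownPageList[] = $page;   // one-to-many: page table gets a book_id column<br />
<br />
$tag = R::dispense('tag');<br />
$book->sharedTagList[] = $tag;  // many-to-many: link table book_tag is created<br />
<br />
R::store($book);                // stores book, page, tag, and the links</pre><br />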
<br />
=== Fluid and Freeze ===<br />
<br />
In all of the above examples, the schema is automatically updated by RBPHP. This is good when developing and testing, but unwanted in production. You can freeze the schema, and force RBPHP to work only with what it has.<br />
<br />
Note that the schema RBPHP generates is typically far from production style - always review the generated schema and update it to your needs; then freeze and go on with a stable DB schema.<br />
<br />
<pre>// freeze all tables, no alterations possible<br />
R::freeze(true);<br />
// freeze just some tables, others are fluid<br />
R::freeze(['book','page','book_page']);<br />
// back to (default) fluid mode<br />
R::freeze(false);</pre><br />
<br />
== Select / Query / Transactions ==<br />
<br />
<pre>$book = R::find('book', 'rating > 4');<br />
$books = R::find('book', 'title LIKE ?', ['Learn to%']); // with bindings<br />
$promotions = R::find('person', 'contract_id IN ('.R::genSlots($contractIDs).')', $contractIDs); // R::genSlots() generates ?'s for bindings<br />
$book = R::findOne('book', 'title = ?', ['SQL Dreams']); // returns single bean, not array; NULL if none found<br />
<br />
// find all, no WHERE's<br />
$books = R::findAll('book');<br />
$books = R::findAll('book' , 'ORDER BY title DESC LIMIT 10');<br />
<br />
// all of find(), findOne(), and findAll() support named slots<br />
$books = R::find('book', 'rating < :rating', [':rating' => 2]);<br />
<br />
// using cursors - saves on loading<br />
$collection = R::findCollection('page', 'ORDER BY content ASC LIMIT 5');<br />
while ($item = $collection->next()) {<br />
    // process bean $item<br />
}<br />
<br />
// find with multiple possible values<br />
R::findLike('flower', ['color' => ['yellow', 'blue']], 'ORDER BY color ASC');<br />
<br />
// if the bean does not exist, it's created<br />
$book = R::findOrCreate('book', ['title' => 'my book', 'price' => 50]);<br />
<br />
// raw queries<br />
R::exec('UPDATE page SET title="test" WHERE id = 1');<br />
// results as multidimensional array, and with parameters binding; returns array of arrays<br />
R::getAll('SELECT * FROM page');<br />
R::getAll('SELECT * FROM page WHERE title = :title', [':title' => 'home']);<br />
// single row, as array<br />
R::getRow('SELECT * FROM page WHERE title LIKE ? LIMIT 1', ['%Jazz%']);<br />
// single column, as array<br />
R::getCol('SELECT title FROM page');<br />
// single cell, as value<br />
R::getCell('SELECT title FROM page LIMIT 1');<br />
// use first column as key of the array, and second column as value<br />
R::getAssoc('SELECT id, title FROM page');<br />
<br />
// transactions - store(), trash() etc. throw exceptions, so when you "catch" you can perform rollback:<br />
R::begin();<br />
try<br />
{<br />
    R::store($page);<br />
    R::commit();<br />
}<br />
catch (Exception $e)<br />
{<br />
    R::rollback();<br />
}<br />
<br />
// transaction closure<br />
R::transaction(function() {<br />
    // ... store some beans ...<br />
});</pre><br />
<br />
== Other Helpful Stuff ==<br />
<br />
<pre>// get tables in current DB<br />
$listOfTables = R::inspect();<br />
// get columns of the table<br />
$fields = R::inspect('book');<br />
<br />
// add a new DB connection<br />
R::addDatabase('DB1', 'sqlite:/tmp/d1.db', 'user', 'pass', $frozen);<br />
// use DB connection; to use the one connected to in R::setup(), use 'default' as DB alias<br />
R::selectDatabase('DB1');</pre><br />
<br />
= Short Notes on RESTful APIs =<br />
<br />
'''WIP - check for updates later.'''<br />
<br />
This section is about RESTful Web-based APIs.<br />
<br />
Take this section with a pinch of salt. Also, see [[#Disclaimer|Disclaimer]].<br />
<br />
== What is RESTful? ==<br />
<br />
There is no standard for what a REST API is (it's an architectural style, not a protocol), but there is plenty of precedent for most of the functional decisions. This list presents my preferences regarding the "gray" areas of the spec.<br />
<br />
'''Representational state transfer (REST)''' is a way to create, read, update or delete information on a server using simple HTTP calls.<br />
<br />
REST has strict separation of ''clients'' and ''servers'', by a uniform interface.<br />
<br />
REST is ''stateless'' - each request from any client contains all the information necessary to service the request, and session state is held in the client. An important exception to this rule is the authentication session, which is stored on the server.<br />
<br />
REST is ''cacheable''. Responses must therefore, implicitly or explicitly, define themselves as cacheable, or not, to prevent clients from reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client-server interactions, further improving scalability and performance.<br />
<br />
REST has a ''uniform interface'':<br />
* ''Resources/entities'' are ''uniquely identified'' by URIs.<br/>The resources themselves are conceptually separate from the representations that are returned to the client (JSON, XML, ...).<br />
* A client which holds such a representation of a resource (including any attached metadata) has ''enough information'' to modify or delete this resource.<br />
* Each message includes ''enough information'' to describe how to process the message.<br />
<br />
Web-based REST APIs typically comply with the following:<br />
* having base [http://en.wikipedia.org/wiki/URI URI], such as http://example.com/resources/ ,<br />
* using [http://en.wikipedia.org/wiki/Internet_media_type Internet media type] for the data; this is often JSON but can be any other valid Internet media type (e.g. XML, Atom, microformats, images, etc.), <br />
* using standard [http://en.wikipedia.org/wiki/HTTP_method HTTP methods] (e.g., GET, HEAD, POST, or DELETE),<br />
* using hypertext links to reference state,<br />
* using hypertext links to reference related resources.<br />
<br />
== Interface Recommendations ==<br />
<br />
* '''''Use POST for both entity creation and update.'''''<br/>Typically, REST APIs are expected to use POST for entity creation, and PUT for entity update. I'm not a big supporter of this: analogously to the majority of programming (scripting) languages, where the same assignment operator serves for both variable creation and update, you should use the same HTTP method.<br/>You'll also avoid discussions about what counts as entity creation vs. update, esp. in scenarios where you want to allow creation and updates of sub-entities, where either of those can be viewed as an update of the entity itself.<br/>Also, note that REST is often not supposed to allow access to sub-entities, but it makes life so much easier, and costs you less traffic.<br />
<br />
* versioning - depends on expected lifetime of the API<br />
<br />
* access to sub-entities<br />
<br />
* pagination and other non-entity specific parameters as GET parameters<br />
<br />
* HEAD on entities; GET on a collection is a list; POST on a collection creates an entity, POST on an entity updates it; use OPTIONS for mature stages, on the '*' ID - a sketch of the resulting routes follows below<br />
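<br />
As a concrete sketch of these preferences (the paths, fields, and version scheme are illustrative only):<br />
<br />
<pre><br />
GET    /v1/books            # list the collection (pagination: ?page=2&per_page=50)<br />
POST   /v1/books            # create a new book<br />
GET    /v1/books/42         # fetch one book<br />
POST   /v1/books/42         # update the book<br />
DELETE /v1/books/42         # delete the book<br />
GET    /v1/books/42/pages   # access to sub-entities<br />
</pre><br />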
<br />
== Disclaimer ==<br />
<br />
''This list is by no means supposed to represent any "standard view" of REST API's.''<br />
<br />
''It does not represent opinions of any of the companies I work(ed) with.''<br />
<br />
''It is my personal checklist when designing interfaces. Many points depend only on your taste.''<br />
<br />
''Some of the opinions are even "not really REST" but make life easier for either you as backend dev, or for your consumers, or both.''