Welcome to django-postgres-metrics’s documentation!¶
Contributing¶
Issuing a new Release¶
1. Install bumpversion with:

   $ pip install git+ssh://git@github.com/MarkusH/bumpversion.git@sign#egg=bumpversion

2. Install twine with:

   $ pip install twine

3. Determine the next version number from changelog.rst (ensuring to follow SemVer).

4. Ensure changelog.rst is representative of the new version number and commit possible changes.

5. Update the version number with bumpversion:

   $ bumpversion $part

   (instead of $part you can use major, minor, or patch).

6. Push the release commit and tag:

   $ git push --tags origin master

7. Wait for TravisCI to complete. If TravisCI fails due to code errors, go back to the start and bump the patch part.

8. Build artifacts with:

   $ python setup.py sdist bdist_wheel

9. Upload artifacts with:

   $ twine upload -s dist/django*postgres*metrics*$newver*

10. Add the likely next version number at the top of changelog.rst.
postgres_metrics¶
postgres_metrics package¶
Submodules¶
postgres_metrics.apps module¶
postgres_metrics.metrics module¶
class postgres_metrics.metrics.CacheHitsMetric[source]¶

Bases: postgres_metrics.metrics.Metric

The typical rule for most applications is that only a fraction of their data is regularly accessed. As with many other things, data tends to follow the 80/20 rule, with 20% of your data accounting for 80% of the reads, and oftentimes it's higher than this. Postgres itself tracks the access patterns of your data and will, on its own, keep frequently accessed data in cache. Generally you want your database to have a cache hit rate of about 99%.

(Source: http://www.craigkerstiens.com/2012/10/01/understanding-postgres-performance/)

label = 'Cache Hits'¶

slug = 'cache-hits'¶

sql¶

    WITH cache AS (
        SELECT
            sum(heap_blks_read) heap_read,
            sum(heap_blks_hit) heap_hit,
            sum(heap_blks_hit) + sum(heap_blks_read) heap_sum
        FROM
            pg_statio_user_tables
    )
    SELECT
        heap_read,
        heap_hit,
        CASE
            WHEN heap_sum = 0 THEN 'N/A'
            ELSE (heap_hit / heap_sum)::text
        END ratio
    FROM
        cache;
class postgres_metrics.metrics.IndexUsageMetric[source]¶

Bases: postgres_metrics.metrics.Metric

While there is no perfect answer, if you’re not somewhere around 99% on any table over 10,000 rows, you may want to consider adding an index. When examining where to add an index you should look at what kind of queries you’re running. Generally you’ll want to add indexes where you’re looking up by some other id, or on values that you’re commonly filtering on, such as created_at fields.

(Source: http://www.craigkerstiens.com/2012/10/01/understanding-postgres-performance/)

label = 'Index Usage'¶

slug = 'index-usage'¶

sql¶

    SELECT
        relname,
        100 * idx_scan / (seq_scan + idx_scan) percent_of_times_index_used,
        n_live_tup rows_in_table
    FROM
        pg_stat_user_tables
    WHERE
        seq_scan + idx_scan > 0
    ORDER BY
        percent_of_times_index_used DESC;
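The percentage in the sql attribute is computed with integer arithmetic: idx_scan and seq_scan are bigint columns, so Postgres truncates the division. A hypothetical Python helper (for illustration only, not part of the package) that mirrors the calculation:

```python
def index_usage_percent(idx_scan, seq_scan):
    """Mirror IndexUsageMetric's percentage: the share of scans on a
    table that used an index. Floor division mimics bigint / bigint
    truncation in Postgres."""
    total = seq_scan + idx_scan
    if total == 0:
        return None  # such rows are excluded by the WHERE clause
    return 100 * idx_scan // total

# a table scanned 10000 times, 9900 of them via an index
print(index_usage_percent(9900, 100))  # → 99
```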
postgres_metrics.urls module¶
postgres_metrics.urls.re_path(route, view, kwargs=None, name=None, *, Pattern=<class 'django.urls.resolvers.RegexPattern'>)¶
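The metric views are exposed through postgres_metrics.urls, which is meant to be included in a project's root URL configuration. A minimal sketch of such a urls.py fragment (the mount point 'admin/postgres-metrics/' is an assumption here and can be chosen freely, as long as the entry comes before the admin URLs):

```python
# project urls.py (sketch; the URL prefix is an assumption)
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('admin/postgres-metrics/', include('postgres_metrics.urls')),
    path('admin/', admin.site.urls),
]
```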