Understanding Heroku Postgres Log Statements and Common Errors

In addition to showing system-level Postgres activity, these logs are useful for understanding your application's use of Postgres and for diagnosing common errors. This article lists common log statements, their purpose, and any action that should be taken.

LOG: duration: 3.565 s …
[12-1] u8akd9ajka [BRONZE] LOG: duration: 3.847 s statement: SELECT "articles".* FROM "articles"…
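A statement that repeatedly shows up with a long duration is usually worth examining through its execution plan. As a minimal sketch (the logged query above is truncated, so a simplified stand-in is used here), you could run it under EXPLAIN ANALYZE from a psql session:

    EXPLAIN ANALYZE
    SELECT "articles".* FROM "articles";

The resulting plan shows which step (sequential scan, sort, join, and so on) accounts for most of the runtime, which is a reasonable starting point before adding indexes or rewriting the query.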

Although this log is emitted from Postgres, the cause for the error has nothing to do with the database itself. Your application happened to crash while connected to Postgres, and did not clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly, and logged a message saying so.


This error can happen as a result of one of several intermittent network connectivity issues. If you are seeing it only intermittently, Connect will detect the error and retry the sync operation shortly thereafter. If you are seeing this error message consistently, you may wish to reach out to Heroku support, as the underlying cause may require an engineer to resolve.

FATAL: too many connections for role
FATAL: too many connections for role "[role name]"

This occurs on Hobby Tier (hobby-dev and hobby-basic) plans, which have a max connection limit of 20 per user. To resolve this error, close some connections to your database by stopping background workers, reducing the number of dynos, or restarting your application in case it has created connection leaks over time. A discussion on handling connections in a Rails application can be found here.

FATAL: could not receive data …
FATAL: could not receive data from WAL stream: SSL error: sslv3 alert unexpected message

This message indicates a backend connection was terminated. This can happen when a user issues pg:kill from the command line client, or similarly runs SELECT pg_cancel_backend(pid); from a psql session.
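For reference, a rough psql-level equivalent is sketched below; the pid value is hypothetical and would come from pg_stat_activity. Note that pg_cancel_backend only cancels the currently running query, while pg_terminate_backend closes the backend connection entirely.

    -- list backends with their current state and query
    SELECT pid, state, query FROM pg_stat_activity;

    -- cancel the running query on backend 12345 (hypothetical pid)
    SELECT pg_cancel_backend(12345);

    -- or terminate that backend's connection altogether
    SELECT pg_terminate_backend(12345);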

FATAL: remaining connection slots are reserved for non-replication superuser connections

Each database plan has a maximum allowed number of connections available, which varies by plan. This message indicates you have reached the maximum number allowed for your applications, and the remaining connections are reserved for superuser access (restricted to Heroku Postgres staff). See Heroku Postgres Production Tier Technical Characterization for details on connection limits for a given plan.

FATAL: no pg_hba.conf entry for host "…", user "u…", database "d…", SSL off

Heroku Postgres hobby tier databases have row limits enforced. When you are over your row limit and attempt to insert data, you will see this error. Upgrade to a production tier database to remove this constraint, or reduce the number of total rows.

PGError: operator does not exist
PGError: ERROR: operator does not exist: character varying = integer
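This error indicates that a character varying (text) column is being compared with an integer value; Postgres will not implicitly convert between the two for the = operator. A minimal illustration, using a hypothetical orders table with a varchar reference column:

    -- fails: operator does not exist: character varying = integer
    SELECT * FROM orders WHERE reference = 123;

    -- compare against a string value instead
    SELECT * FROM orders WHERE reference = '123';

    -- or cast the column explicitly, if it always holds numeric text
    SELECT * FROM orders WHERE reference::integer = 123;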

• Abrupt client (application side) disconnections. This can happen for many reasons, from your app crashing, to transient network availability. When your app tries to issue a query again against Postgres, the connection is just gone, leading to a crash. When Heroku detects a crash, we kill that dyno and start a new one, which re-establishes the connection.

Out of memory (OOM) errors typically happen when the server that is running the database cannot allocate any more memory to the database connections or its cache. Any number of issues could manifest themselves as an OOM error. Before bumping your Heroku Postgres plan up to a higher level, explore all avenues for diagnosing the problem.

Extremely complex querying

Joins, sorting, and hash-based operations put pressure on the working memory in Postgres, and for any given database, the number of concurrent connections can exacerbate the problem. Sort operations are the result of ORDER BY, DISTINCT, and merge joins. Hash-based operations are typically the result of processing IN subqueries, hash-based aggregations, and hash joins. The more complex the query, the higher the probability that it will use more of these operations. Looking at the explain plan of your query, or using the Heroku Postgres expensive queries feature, is a good first step toward understanding the bottlenecks of your queries.
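As a sketch of what that first step can look like (the table and query here are hypothetical), running the query with ANALYZE and BUFFERS shows whether sorts and hashes stay in memory or spill to disk:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, count(*)
    FROM orders
    WHERE created_at > now() - interval '30 days'
    GROUP BY customer_id
    ORDER BY count(*) DESC;

In the output, lines such as "Sort Method: external merge  Disk: …" typically indicate operations that no longer fit in working memory and are good candidates for tuning.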

Ruby, ActiveRecord and prepared statements

Prepared statements, in and of themselves, are not a bad mechanism to use when working with Postgres. The benefit of using them is that the database can execute the statement with extremely high efficiency. Unfortunately, many ORMs in popular web frameworks, notably ActiveRecord in the Ruby on Rails framework, do not construct prepared statements effectively. If a query can contain a variable number of parameters, multiple prepared statements will be created for what is logically the same query. For example, let's assume that our application allows its users to select products from a products table by id. If customer one selects two products, ActiveRecord would define a query and create a prepared statement:

    SELECT * FROM products WHERE id in ($1, $2);
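To make the effect concrete, the sketch below mimics at the SQL level what the database driver does over the wire (the statement names are made up for illustration): each distinct number of IN-list items gets its own prepared statement, and all of them stay cached in the session.

    -- one prepared statement per IN-list size
    PREPARE products_by_2_ids (int, int) AS
      SELECT * FROM products WHERE id IN ($1, $2);
    PREPARE products_by_3_ids (int, int, int) AS
      SELECT * FROM products WHERE id IN ($1, $2, $3);

    EXECUTE products_by_2_ids(10, 11);

    -- every statement prepared in this session is still cached
    SELECT name, statement FROM pg_prepared_statements;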

The problem with this code path is that customers vary in how many items they select for the IN clause, and ActiveRecord parameterizes the query based on the number of items in that clause. This can result in too many prepared statements being cached, ultimately using too much memory on the system. If this is happening to your application, consider disabling prepared statements.

Rails 4.1+
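In Rails 4.1 and later this can generally be done by setting prepared_statements: false in the database configuration (config/database.yml) for the affected environment; consult the documentation for your Rails version for the exact setting.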

While Heroku Postgres has connection limits based on the plan type, they are meant to be a guideline. Each connection within Postgres takes up some RAM, and if too many are created at any given time, that too can cause problems for the database. If you run into this situation, it is recommended to use a production-grade connection pooler to relieve the memory pressure caused by constantly opening and closing connections.
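One way to gauge whether pooling would help is to look at how many open connections are actually doing work at any given moment; a minimal check might look like this:

    -- connections grouped by what they are currently doing
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

If most connections sit in the idle state, a pooler such as PgBouncer can multiplex them onto a much smaller number of actual backend connections.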

Database plan is too small

In some cases, the database plan selected is simply too small for the workloads placed on it. If all other avenues for clearing up OOM errors have been explored, the primary database might need to be upgraded to the next plan level. Well-engineered applications will serve most of their query results from the cache that Postgres manages. Generally, if the cache hit ratio for the queries being executed starts to suffer, a higher plan should be considered.
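A common way to estimate the table cache hit ratio is to query the standard statistics views (a sketch; values around 0.99 are generally considered healthy):

    -- share of table reads served from Postgres's buffer cache
    SELECT sum(heap_blks_hit) /
           nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
    FROM pg_statio_user_tables;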