Logging Exceptions To Your SQLAlchemy Database
So you'd like to log to your database rather than to a file. Well, here's a brief rundown of exactly how you'd do that.
First we need to define a Log model for SQLAlchemy (do this in myapp.models):
from sqlalchemy import Column
from sqlalchemy.types import DateTime, Integer, String
from sqlalchemy.sql import func
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Log(Base):
    __tablename__ = 'logs'
    id = Column(Integer, primary_key=True)  # auto-incrementing primary key
    logger = Column(String)  # the name of the logger (e.g. myapp.views)
    level = Column(String)  # info, debug, or error?
    trace = Column(String)  # the full traceback printout
    msg = Column(String)  # any custom log message you may have included
    created_at = Column(DateTime, default=func.now())  # the current timestamp

    def __init__(self, logger=None, level=None, trace=None, msg=None):
        self.logger = logger
        self.level = level
        self.trace = trace
        self.msg = msg

    def __unicode__(self):
        return self.__repr__()

    def __repr__(self):
        return "<Log: %s - %s>" % (self.created_at.strftime('%m/%d/%Y-%H:%M:%S'), self.msg[:50])
Nothing too exciting is occurring here. We've simply created a new table named 'logs'.
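The handler we write later imports a DBSession from myapp.models to add Log rows with. As a point of reference, here is a minimal sketch of what that session setup might look like; the ZopeTransactionExtension, the SQLite URL, and creating the table at import time are all assumptions, so reuse however your application already configures its engine and session:

# A minimal sketch of the DBSession the handler below relies on.
# ZopeTransactionExtension and the SQLite URL are assumptions; adapt
# this to your application's existing SQLAlchemy setup.
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from zope.sqlalchemy import ZopeTransactionExtension

DBSession = scoped_session(sessionmaker(extension=ZopeTransactionExtension()))

engine = create_engine('sqlite:///myapp.db')  # assumed database URL
DBSession.configure(bind=engine)
Base.metadata.create_all(engine)  # creates the 'logs' table if it doesn't exist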
Before we get into how we use this table for good, here's a quick review of how logging works in a script:
# http://docs.python.org/howto/logging.html#configuring-logging
import logging

# create logger
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)

# create console handler and set level to debug
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)

# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# add formatter to ch
ch.setFormatter(formatter)

# add ch to logger
logger.addHandler(ch)

# 'application' code
logger.debug('debug message')
logger.info('info message')
logger.warning('warning message')
logger.error('error message')
logger.critical('critical message')
What you should gain from the above intro is that your handler uses a formatter and does the heavy lifting of producing the output for each logging.LogRecord. That output actually comes from logging.Handler.emit, a method we will now override as we create our SQLAlchemyHandler.
Let's subclass Handler now (put this in myapp.handlers):
import logging
import traceback

import transaction

from myapp.models import Log, DBSession

class SQLAlchemyHandler(logging.Handler):
    # A very basic handler that commits a LogRecord to the SQL database
    def emit(self, record):
        trace = None
        exc = record.__dict__['exc_info']
        if exc:
            trace = traceback.format_exc()
        log = Log(
            logger=record.__dict__['name'],
            level=record.__dict__['levelname'],
            trace=trace,
            msg=record.__dict__['msg'],)
        DBSession.add(log)
        transaction.commit()
For a little more depth: logging.LogRecord, of which record is an instance, keeps all of its nifty log information in its __dict__ attribute.
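If you're curious what ends up in that dictionary, the following standalone sketch (standard library only, nothing specific to this recipe) prints the standard LogRecord attributes such as name, levelname, msg, and exc_info:

# Sketch: peek at everything a LogRecord carries in its __dict__.
import logging

record = logging.LogRecord(
    name='myapp.views', level=logging.ERROR, pathname=__file__,
    lineno=1, msg='something broke', args=(), exc_info=None)
for key in sorted(record.__dict__):
    print('%s = %r' % (key, record.__dict__[key]))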
Now we need to add this logging handler to our .ini configuration files. Before doing so, our production.ini file should contain something like:
[loggers]
keys = root, myapp, sqlalchemy

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console

[logger_myapp]
level = WARN
handlers =
qualname = myapp

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine
# "level = INFO" logs SQL queries.
# "level = DEBUG" logs SQL queries and results.
# "level = WARN" logs neither. (Recommended for production systems.)

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(asctime)s %(levelname)-5.5s [%(name)s][%(threadName)s] %(message)s
We must add our SQLAlchemyHandler to the mix, so make the following changes to your production.ini file.
[handlers]
keys = console, sqlalchemy

[logger_myapp]
level = DEBUG
handlers = sqlalchemy
qualname = myapp

[handler_sqlalchemy]
class = myapp.handlers.SQLAlchemyHandler
args = ()
level = NOTSET
formatter = generic
The changes we made simply allow Paster to recognize a new handler, sqlalchemy, defined in the [handler_sqlalchemy] section. Most everything else about this configuration should be straightforward. If anything is still baffling, use this as a good opportunity to read the Python logging documentation.
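As a side note, the [loggers]/[handlers]/[formatters] sections are ordinary fileConfig-style logging configuration, so you can also load them outside the server process, for example from a standalone script. A minimal sketch, assuming production.ini sits in the current directory and your Pyramid version provides pyramid.paster.setup_logging:

# Sketch: loading the same logging configuration from a script.
# setup_logging() reads only the logging-related sections of the ini file.
import logging
from pyramid.paster import setup_logging

setup_logging('production.ini')  # path to the ini file is assumed
logging.getLogger('myapp').debug('this record should reach SQLAlchemyHandler')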
Below is an example of how you might use the logger in myapp.views:
import logging
from pyramid.view import view_config
from pyramid.response import Response

log = logging.getLogger(__name__)

@view_config(route_name='home')
def root(request):
    log.debug('exception impending!')
    try:
        1 / 0
    except ZeroDivisionError:
        log.exception('1/0 error')
    log.info('test complete')
    return Response("test complete!")
When this view code is executed, you'll see up to three records (depending on the logging level you allow in your configuration file)!
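To confirm that the records actually landed in the database, you can query the logs table like any other model. A minimal sketch, assuming the DBSession from myapp.models is bound to your engine:

# Sketch: read back the most recent log entries.
from myapp.models import DBSession, Log

recent = DBSession.query(Log).order_by(Log.created_at.desc()).limit(10).all()
for entry in recent:
    print(entry)  # uses Log.__repr__, e.g. <Log: 01/01/2012-12:00:00 - 1/0 error>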
For more power, match this up with pyramid_exclog at https://docs.pylonsproject.org/projects/pyramid_exclog/en/latest/
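pyramid_exclog is enabled with a one-line include in your ini file (a sketch; see its documentation for the full set of options):

[app:main]
# ... your existing application settings ...
pyramid.includes = pyramid_exclog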