The Effects of Potential NULL Values in Row Sources During Updates Using an IN Subquery

January 10, 2010

A couple of years ago the following question appeared in the comp.databases.oracle.server Usenet group:
http://groups.google.com/group/comp.databases.oracle.server/browse_thread/thread/f3c3dea939bf9c12

I’m doing the following update (in Oracle Database 9.2.0.5):

UPDATE table1 t1
SET field1 = NULL
WHERE field2 NOT IN (SELECT /*+ HASH_AJ */ t2.field2 FROM table2 t2)

Instead of doing the ANTIJOIN, the database is performing a FILTER on
table1 by reading table2. Why doesn't the hint work? For several
tables other than table1 the hint does work.

I did not mention it at the time, but I am not sure if that hint is valid inside the subquery – I think that it needs to appear immediately after the UPDATE keyword.  Taking a guess at the original poster’s problem, I set up a simple test case using Oracle 10.2.0.2:

CREATE TABLE T1 (FIELD1 NUMBER(12), FIELD2 NUMBER(12) NOT NULL);
CREATE TABLE T2 (FIELD1 NUMBER(12), FIELD2 NUMBER(12) NOT NULL);

INSERT INTO
  T1
SELECT
  100,
  ROWNUM*3
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

INSERT INTO
  T2
SELECT
  100,
  ROWNUM*9
FROM
  DUAL
CONNECT BY
  LEVEL<=100000;

COMMIT;
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1');
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2');

I enabled a 10053 trace and then executed the following SQL statement:

UPDATE t1
SET field1 = NULL
WHERE field2 NOT IN (SELECT /*+ HASH_AJ */ t2.field2 FROM t2)

The 10053 trace file showed the following plan:

============
Plan Table
============
-----------------------------------------+-----------------------------------+
| Id  | Operation              | Name    | Rows  | Bytes | Cost  | Time      |
-----------------------------------------+-----------------------------------+
| 0   | UPDATE STATEMENT       |         |       |       |   307 |           |
| 1   |  UPDATE                | T1      |       |       |       |           |
| 2   |   HASH JOIN RIGHT ANTI |         |   208 |  2496 |   307 |  00:00:04 |
| 3   |    TABLE ACCESS FULL   | T2      |   98K |  488K |    44 |  00:00:01 |
| 4   |    TABLE ACCESS FULL   | T1      |   98K |  684K |    86 |  00:00:02 |
-----------------------------------------+-----------------------------------+

That looks like the plan that the OP would like to see.  Now, repeat the test with the following table definitions:

CREATE TABLE T1 (FIELD1 NUMBER(12), FIELD2 NUMBER(12));
CREATE TABLE T2 (FIELD1 NUMBER(12), FIELD2 NUMBER(12));

The 10053 trace file showed the following execution plan when executing the same SQL statement:

============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id  | Operation            | Name    | Rows  | Bytes | Cost  | Time      |
---------------------------------------+-----------------------------------+
| 0   | UPDATE STATEMENT     |         |       |       | 3886K |           |
| 1   |  UPDATE              | T1      |       |       |       |           |
| 2   |   FILTER             |         |       |       |       |           |
| 3   |    TABLE ACCESS FULL | T1      |   98K |  684K |    44 |  00:00:01 |
| 4   |    TABLE ACCESS FULL | T2      |     1 |     5 |    45 |  00:00:01 |
---------------------------------------+-----------------------------------+

My last comment in the post suggested that the problem might be that even though there might not be any NULL values in the columns, the column definitions might permit NULL values, and that alone might restrict the options that are available to the optimizer for re-writing the SQL statement into a more efficient form.
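The reason nullable columns matter here can be demonstrated outside of Oracle. As a quick sketch (using Python's sqlite3 module purely for illustration – SQLite applies the same three-valued NULL logic to NOT IN that Oracle does), a single NULL in the subquery's result set prevents every row from qualifying, which is the case the optimizer must guard against whenever the column is merely *allowed* to contain NULLs:

```python
import sqlite3

# Build two tiny tables mirroring the T1/T2 test case.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (field2 INTEGER)")
cur.execute("CREATE TABLE t2 (field2 INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?)", [(3,), (6,), (9,)])
cur.executemany("INSERT INTO t2 VALUES (?)", [(9,), (18,)])

# No NULLs anywhere: NOT IN behaves like a simple anti-join.
rows = cur.execute(
    "SELECT field2 FROM t1 WHERE field2 NOT IN (SELECT field2 FROM t2)"
).fetchall()
print(rows)  # [(3,), (6,)]

# One NULL in the subquery makes every NOT IN comparison UNKNOWN,
# so NO rows qualify.
cur.execute("INSERT INTO t2 VALUES (NULL)")
rows = cur.execute(
    "SELECT field2 FROM t1 WHERE field2 NOT IN (SELECT field2 FROM t2)"
).fetchall()
print(rows)  # []
```

A plain anti-join would still have returned the 3 and 6 rows in the second query, which is why the optimizer cannot blindly rewrite NOT IN as an anti-join when the columns are nullable.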

Let’s try another test case.  Let’s create two tables and see how a similar update statement performs with a larger dataset, for instance two tables with 10,000,000 rows each.  The table creation script follows:

CREATE TABLE T1 (COL1 NUMBER(12), COL2 NUMBER(12));
CREATE TABLE T2 (COL1 NUMBER(12), COL2 NUMBER(12));

INSERT INTO
  T1
SELECT
  100,
  ROWNUM*3
FROM
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V1,
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=1000) V2;

INSERT INTO
  T2
SELECT
  100,
  ROWNUM*9
FROM
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=10000) V1,
  (SELECT
    ROWNUM RN
  FROM
    DUAL
  CONNECT BY
    LEVEL<=1000) V2;

COMMIT;

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',ESTIMATE_PERCENT=>NULL)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',ESTIMATE_PERCENT=>NULL)

The test begins on Oracle 11.1.0.7 and later shifts to Oracle 10.2.0.4.  The test script:
1. Enables a 10053 trace, executes the UPDATE statement, and displays the execution plan with runtime statistics.
2. Moves the HASH_AJ hint next to the GATHER_PLAN_STATISTICS hint and re-executes the UPDATE statement.
3. Removes the HASH_AJ hint and instead specifies IS NOT NULL restrictions in the WHERE clause.
4. Modifies the table columns to add NOT NULL constraints and tries again with no HASH_AJ hint.
The script follows:

SPOOL ANTIJOIN_TEST11.TXT
SET LINESIZE 150
SET PAGESIZE 2000

UPDATE /*+ GATHER_PLAN_STATISTICS */
  T1
SET
  COL1 = NULL
WHERE
  COL2 NOT IN (
    SELECT /*+ HASH_AJ */
      T2.COL2
    FROM
      T2);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

ROLLBACK;

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'ANTIJOIN_TEST2';

UPDATE /*+ HASH_AJ GATHER_PLAN_STATISTICS */
  T1
SET
  COL1 = NULL
WHERE
  COL2 NOT IN (
    SELECT /*+  */
      T2.COL2
    FROM
      T2);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

ROLLBACK;

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SESSION SET TRACEFILE_IDENTIFIER = 'ANTIJOIN_TEST3';

UPDATE /*+ GATHER_PLAN_STATISTICS */
  T1
SET
  T1.COL1 = NULL
WHERE
  COL2 NOT IN (
    SELECT /*+  */
      T2.COL2
    FROM
      T2
    WHERE
      T2.COL2 IS NOT NULL)
  AND T1.COL2 IS NOT NULL;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));

ROLLBACK;

ALTER TABLE T1 MODIFY COL2 NOT NULL;
ALTER TABLE T2 MODIFY COL2 NOT NULL;

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'ANTIJOIN_TEST4';

EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T1',ESTIMATE_PERCENT=>NULL,NO_INVALIDATE=>FALSE)
EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'T2',ESTIMATE_PERCENT=>NULL,NO_INVALIDATE=>FALSE)

ALTER SYSTEM FLUSH BUFFER_CACHE;
ALTER SYSTEM FLUSH BUFFER_CACHE;

ALTER SESSION SET TRACEFILE_IDENTIFIER = 'ANTIJOIN_TEST5';
 
UPDATE /*+ GATHER_PLAN_STATISTICS */
  T1
SET
  COL1 = NULL
WHERE
  COL2 NOT IN (
    SELECT /*+  */
      T2.COL2
    FROM
      T2);
 
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'ALLSTATS LAST'));
 
ROLLBACK;

How did we do on Oracle 11.1.0.7?

The first update statement had an execution plan like this:

UPDATE /*+ GATHER_PLAN_STATISTICS */   T1 SET   COL1 = NULL WHERE                                                                          
COL2 NOT IN (     SELECT /*+ HASH_AJ */       T2.COL2     FROM       T2)                                                                   

Plan hash value: 875068713   

--------------------------------------------------------------------------------------------------------------------------------------------------      
| Id  | Operation                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|      
--------------------------------------------------------------------------------------------------------------------------------------------------      
|   0 | UPDATE STATEMENT         |      |      1 |        |      0 |00:04:07.39 |    6898K|  85474 |  36766 |       |       |          |         |      
|   1 |  UPDATE                  | T1   |      1 |        |      0 |00:04:07.39 |    6898K|  85474 |  36766 |       |       |          |         |      
|*  2 |   HASH JOIN RIGHT ANTI NA|      |      1 |   9999K|   6666K|00:00:30.58 |   38576 |  75353 |  36766 |   196M|    10M|   46M (1)|     298K|      
|   3 |    TABLE ACCESS FULL     | T2   |      1 |     10M|     10M|00:00:10.03 |   19288 |  19278 |      0 |       |       |          |         |      
|   4 |    TABLE ACCESS FULL     | T1   |      1 |     10M|     10M|00:00:00.12 |   19288 |  19278 |      0 |       |       |          |         |      
--------------------------------------------------------------------------------------------------------------------------------------------------      

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("COL2"="T2"."COL2")                 

According to the DBMS_XPLAN output, even without the table columns declared as NOT NULL, Oracle was able to use a special NULL-aware form of the hash anti-join – the query completed in just over 4 minutes.  According to the 10053 trace, only the GATHER_PLAN_STATISTICS hint was recognized.  The final query after transformation follows:
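What the "NA" in "HASH JOIN RIGHT ANTI NA" buys can be modeled in a few lines of Python.  This is a rough sketch of the mechanics (an assumption for illustration, not Oracle's actual implementation): build a hash table on the inner row source, but additionally track whether the inner side contained a NULL, and never return outer rows whose join column is NULL:

```python
def null_aware_anti_join(outer, inner):
    """Return the outer values v for which 'v NOT IN inner' is TRUE
    under SQL three-valued logic (models HASH JOIN ANTI NA)."""
    inner_has_null = any(v is None for v in inner)
    build = {v for v in inner if v is not None}  # hash table on the inner side
    if inner_has_null:
        # Any NULL on the inner side makes NOT IN UNKNOWN for every row.
        return []
    # A NULL outer value compares UNKNOWN to everything, so it never qualifies.
    return [v for v in outer if v is not None and v not in build]

print(null_aware_anti_join([3, 6, 9, None], [9, 18]))  # [3, 6]
print(null_aware_anti_join([3, 6, 9], [9, 18, None]))  # []
```

A plain (non-null-aware) anti-join skips the two NULL checks, which is why it is only legal when the optimizer can prove neither column produces NULLs.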

SELECT 0 FROM "TESTUSER"."T2" "T2","TESTUSER"."T1" "T1" WHERE "T1"."COL2"="T2"."COL2"

Moving the HASH_AJ hint so that it is next to the GATHER_PLAN_STATISTICS hint caused the optimizer to recognize the hint, but the 10053 trace file showed that the hint was not used.  The execution plan was identical to the execution plan above, although the actual time and number of consistent/current mode gets differed slightly.

Specifying the IS NOT NULL restriction in the WHERE clause did change the execution plan to a standard HASH JOIN ANTI:

UPDATE /*+ GATHER_PLAN_STATISTICS */   T1 SET   T1.COL1 = NULL WHERE                                                                       
COL2 NOT IN (     SELECT /*+  */       T2.COL2     FROM       T2                                                                           
WHERE       T2.COL2 IS NOT NULL)   AND T1.COL2 IS NOT NULL                                                                                 

Plan hash value: 2180616727  

-----------------------------------------------------------------------------------------------------------------------------------------------         
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|         
-----------------------------------------------------------------------------------------------------------------------------------------------         
|   0 | UPDATE STATEMENT      |      |      1 |        |      0 |00:04:41.32 |    6926K|  84797 |  36766 |       |       |          |         |         
|   1 |  UPDATE               | T1   |      1 |        |      0 |00:04:41.32 |    6926K|  84797 |  36766 |       |       |          |         |         
|*  2 |   HASH JOIN RIGHT ANTI|      |      1 |   9999K|   6666K|00:00:57.74 |   38576 |  75353 |  36766 |   196M|    10M|   46M (1)|     298K|         
|*  3 |    TABLE ACCESS FULL  | T2   |      1 |     10M|     10M|00:00:00.09 |   19288 |  19278 |      0 |       |       |          |         |         
|*  4 |    TABLE ACCESS FULL  | T1   |      1 |     10M|     10M|00:00:00.12 |   19288 |  19278 |      0 |       |       |          |         |         
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("COL2"="T2"."COL2")                 
   3 - filter("T2"."COL2" IS NOT NULL)            
   4 - filter("T1"."COL2" IS NOT NULL)

The number of consistent gets increased slightly, and so did the actual run time.

Let’s take a look at the statistics from the plan when the table columns have the NOT NULL constraint:

-----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp|
-----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | UPDATE STATEMENT      |      |      1 |        |      0 |00:04:47.51 |    6930K|  84435 |  36766 |       |       |          |         |
|   1 |  UPDATE               | T1   |      1 |        |      0 |00:04:47.51 |    6930K|  84435 |  36766 |       |       |          |         |
|*  2 |   HASH JOIN RIGHT ANTI|      |      1 |   9999K|   6666K|00:00:29.61 |   38576 |  75353 |  36766 |   196M|    10M|   46M (1)|     298K|
|   3 |    TABLE ACCESS FULL  | T2   |      1 |     10M|     10M|00:00:00.01 |   19288 |  19278 |      0 |       |       |          |         |
|   4 |    TABLE ACCESS FULL  | T1   |      1 |     10M|     10M|00:00:00.12 |   19288 |  19278 |      0 |       |       |          |         |
-----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("COL2"="T2"."COL2")      

The same execution plan was used again, but the number of consistent gets and actual time increased even more.  In case you are wondering, the 10053 trace file shows that the final query after transformation is this:

SELECT 0 FROM "TESTUSER"."T2" "T2","TESTUSER"."T1" "T1" WHERE "T1"."COL2"="T2"."COL2"

----

So, Oracle 11.1.0.7 is able to take advantage of the NULL aware “HASH JOIN RIGHT ANTI NA” operation.  Does this help?  Well, let’s try the test again on Oracle 10.2.0.4 running on the same computer.  The execution plan from the 10053 trace might look something like this for the first SQL statement with the hint in the wrong location:

UPDATE /*+ GATHER_PLAN_STATISTICS */   T1 SET   COL1 = NULL WHERE   COL2 NOT
IN (     SELECT /*+ HASH_AJ */       T2.COL2     FROM       T2)

============
Plan Table
============
---------------------------------------+-----------------------------------+
| Id  | Operation            | Name    | Rows  | Bytes | Cost  | Time      |
---------------------------------------+-----------------------------------+
| 0   | UPDATE STATEMENT     |         |       |       |   50G |           |
| 1   |  UPDATE              | T1      |       |       |       |           |
| 2   |   FILTER             |         |       |       |       |           |
| 3   |    TABLE ACCESS FULL | T1      | 9766K |   76M |  5332 |  00:01:04 |
| 4   |    TABLE ACCESS FULL | T2      |     1 |     6 |  5348 |  00:01:05 |
---------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - filter( IS NULL)
4 - filter(LNNVL("T2"."COL2"<>:B1))

Note that the execution plan includes a FILTER operation on ID 2 of the plan – that is what the OP was seeing.  The final query after transformation looked something like this:

SELECT 0 FROM "TESTUSER"."T1" "SYS_ALIAS_1" WHERE  NOT EXISTS (SELECT /*+ HASH_AJ */ 0 FROM "TESTUSER"."T2" "T2" WHERE LNNVL("T2"."COL2"<>"SYS_ALIAS_1"."COL2"))

So, what did the execution plan look like with the statistics?  I don’t know – I killed it after a couple of hours.  When I killed it, the SQL statement had performed 114,905,284 consistent or current mode gets and burned through 9,763.26 seconds of CPU time.
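The runaway execution time follows directly from the shape of the transformed query.  LNNVL(p) returns TRUE when the predicate p evaluates to FALSE or UNKNOWN (for instance, when one side of the comparison is NULL), and the FILTER operation effectively re-runs the correlated subquery for each outer row.  A sketch of those semantics in Python (an illustration of the logic, not Oracle's implementation – with full scans on both sides, the work grows with the product of the two row counts):

```python
def ne(a, b):
    """SQL '<>' under three-valued logic; None represents UNKNOWN."""
    if a is None or b is None:
        return None
    return a != b

def lnnvl(p):
    """LNNVL is TRUE when the predicate is FALSE or UNKNOWN."""
    return p is False or p is None

def filter_plan(outer, inner):
    """Rows kept by:
    NOT EXISTS (SELECT 0 FROM t2 WHERE LNNVL(t2.col2 <> t1.col2))."""
    kept = []
    for v in outer:                                       # full scan of T1
        exists = any(lnnvl(ne(w, v)) for w in inner)      # scan of T2 per row!
        if not exists:
            kept.append(v)
    return kept

print(filter_plan([3, 6, 9], [9, 18]))        # [3, 6]
print(filter_plan([3, 6, 9], [9, 18, None]))  # []
```

With 10,000,000 rows in each table, that per-row inner scan is why the statement accumulated over a hundred million buffer gets before being killed.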

That just means we need a more powerful server, right?  OK, let’s try another experiment on a more powerful server.  This time, we will enable a 10046 trace at level 8, use the query with the IS NOT NULL conditions in the WHERE clause, and gradually decrease the OPTIMIZER_FEATURES_ENABLED parameter from 11.1.0.7 to 10.2.0.1 to 9.2.0.1 to 8.1.7.

11.1.0.7:

Cursor 7   Ver 1   Parse at 0.000000  Similar Cnt 1
|PARSEs       1|CPU S    0.000000|CLOCK S    0.000000|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      1|
|EXECs        1|CPU S   49.077915|CLOCK S   64.402548|ROWs  6666667|PHY RD BLKs     27342|CON RD BLKs (Mem)     39170|CUR RD BLKs (Mem)   6816007|SHARED POOL MISs      0|
|FETCHs       0|CPU S    0.000000|CLOCK S    0.000000|ROWs        0|PHY RD BLKs         0|CON RD BLKs (Mem)         0|CUR RD BLKs (Mem)         0|SHARED POOL MISs      0|

UPDATE
  T1
SET
  T1.COL1 = NULL
WHERE
  COL2 NOT IN (
    SELECT /*+  */
      T2.COL2
    FROM
      T2
    WHERE
      T2.COL2 IS NOT NULL)
  AND T1.COL2 IS NOT NULL

       (Rows 0)   UPDATE  T1 (cr=39170 pr=27342 pw=27311 time=0 us)
 (Rows 6666667)    HASH JOIN RIGHT ANTI (cr=38576 pr=27342 pw=27311 time=93577 us cost=28657 size=28 card=2)
(Rows 10000000)     TABLE ACCESS FULL T2 (cr=19288 pr=0 pw=0 time=0 us cost=5290 size=60000000 card=10000000)
(Rows 10000000)     TABLE ACCESS FULL T1 (cr=19288 pr=0 pw=0 time=15717 us cost=5290 size=80000000 card=10000000)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  direct path write temp                        881        0.14          1.60
  direct path read temp                         882        0.40         12.03
  latch: checkpoint queue latch                   4        0.00          0.00
  log file switch completion                      8        0.54          1.01
  SQL*Net message to client                       1        0.00          0.00
  SQL*Net message from client                     1        0.00          0.00

64.4 seconds of total run time with 49.1 seconds of CPU time – I guess that is a bit faster.  Quite a few consistent gets and current mode gets.

10.2.0.1 and 9.2.0.1 generated the same execution plan as the above, so the statistics should also be about the same.

8.1.7:

Well, we ran into a problem with this one – I killed it after a bit over an hour.  The execution plan (displayed in the TYPICAL format) looked like this:

SQL_ID  11apc6grf5fph, child number 0
-------------------------------------
UPDATE   T1 SET   T1.COL1 = NULL WHERE   COL2 NOT IN (     SELECT /*+ 
*/       T2.COL2     FROM       T2     WHERE       T2.COL2 IS NOT NULL)
  AND T1.COL2 IS NOT NULL

Plan hash value: 3288325718

---------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost  | Inst   |
---------------------------------------------------------------------
|   0 | UPDATE STATEMENT    |      |       |       |  2926 |        |
|   1 |  UPDATE             | T1   |       |       |       |   OR11 |
|*  2 |   FILTER            |      |       |       |       |        |
|*  3 |    TABLE ACCESS FULL| T1   |   500K|  3906K|  2926 |   OR11 |
|*  4 |    TABLE ACCESS FULL| T2   |     1 |     6 |  2926 |   OR11 |
---------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NULL)
   3 - filter("T1"."COL2" IS NOT NULL)
   4 - filter(("T2"."COL2"=:B1 AND "T2"."COL2" IS NOT NULL))

That darn FILTER on ID 2 is back again, like what the OP saw.  What were the execution statistics at that point?  4853.41 seconds of CPU time and 212,520,861 consistent or current mode gets.  I know, let’s throw some more hardware at it, because that is the cheap and completely safe, risk-free solution, rather than doing a root cause analysis of the problem (that has to be true, I read it on the Internet – more than once :-) ).


3 responses

10 01 2010
Greg Rahn

My last comment in the post suggested that the problem might be that even though there might not be any NULL values in the columns, the column definitions might permit NULL values, and that alone might restrict the options that are available to the optimizer for re-writing the SQL statement into a more efficient form.

This is true. If columns can be determined to be NOT NULL, there are more query transformations that can take place. When using NOT IN() it leaves NULL as a possibility unless the column is NOT NULL or an IS NOT NULL predicate is used. Execution times will generally be worse if the NOT NULL predicate cannot be applied (implicitly or explicitly) and no NULL values exist. I’ve seen this countless times.

10 01 2010
Narendra

Charles,

Nice one. Thanks.
But I am actually a bit confused by the results in 11.1.0.7.
Why were the number of consistent gets higher when we did the “right” thing (i.e. either used an IS NOT NULL predicate or defined a NOT NULL constraint) than when we did not bother to do the “right” thing (i.e. no NOT NULL constraint or IS NOT NULL condition)? Does this mean that from 11g onwards it is better not to use a NOT NULL constraint or an IS NOT NULL predicate?

10 01 2010
Charles Hooper

I have not spent a lot of time analyzing why the UPDATE statement completed faster and with fewer consistent gets when the “HASH JOIN RIGHT ANTI NA” operation appeared in the execution plan, compared to the “HASH JOIN RIGHT ANTI” operation with either the NOT NULL constraint on the table, or specified in the WHERE clause (I mentioned it hoping that someone would be curious, and try to test to see why it happened).

Keep in mind that this is just one test case – you may find that if you change the order of the UPDATE statements so that the test statement with the IS NOT NULL predicate in the WHERE clause is executed first, it then becomes the most efficient method (it could be that the previously rolled back UPDATEs changed the high water mark of the tables). It could be that if I ran the first set of tests on the other server (the one that performed 212,520,861 consistent or current mode gets in 4,853.41 seconds, rather than the one that performed 114,905,284 consistent or current mode gets in 9,763.26 seconds) the results would be about even or in favor of the IS NOT NULL predicates. It could be that if I had not changed the STATISTICS_LEVEL from TYPICAL to ALL, the elapsed times might have been identical. What is great about well-constructed test cases is that anyone can take the test case, run it on their system, make small modifications, and see how the performance changes.

Keep in mind that this test case only looked at the NOT IN type of subquery for an UPDATE statement (note that in one case the optimizer changed it to NOT EXISTS). The results could be completely different for DELETE or SELECT statements. As Greg Rahn confirmed, the potential for NULL values limits the options of the cost-based optimizer to find more efficient execution plans (and that can impact performance). You might ask, what if one (or both) of the columns were indexed – how much would that impact the test, since by default NULL values are not included in single-column B*Tree indexes (and there is the potential for NULLs)? With the above test case, you could make a couple of modifications, maybe add a more restrictive WHERE clause, to see how the potential for NULL values affects the optimizer.

Then, of course, there is always the risk that someone will set OPTIMIZER_FEATURES_ENABLED to a value less than 11.1.0.6, and suddenly, rather than executing in 4 minutes, the statement maxes out one of the CPUs for 3+ hours and is not even 10% finished when it is finally killed.

I would suggest that if a column should never contain NULL values, it should have a constraint that prevents NULL values from being stored in the column. Doing so helps the query optimizer and reduces the need to add seemingly unneeded IS NOT NULL predicates to the WHERE clause.

Anyone else have an opinion?
