Monday, October 17, 2016

The Value in Project Valhalla

I have been interested in the progress of Project Valhalla for quite a while, but Brian Goetz's recent message "Project Valhalla: Goals" has raised my level of interest. I have frequently enjoyed Goetz's writing because he combines two characteristics I want most in a technical author: he has deep knowledge of the subjects he writes about, but is also able to present those concepts at a level approachable to the rest of us who lack his depth of knowledge in that area. The mail message "Project Valhalla: Goals" is significant in several ways and is highly approachable; it should be read directly by anyone interested in why Project Valhalla is so exciting. Although I recommend reading the original message, I collect some of my observations from reading it in this post.

During my software development career, regardless of the programming language I have been using, I have typically found that most software development consists of a series of trade-offs. It is very common to run into areas where the best-performing code is less readable than slower code. This trade-off is, in fact, what leads to premature optimization. The danger of premature optimization is that it is "premature" because the performance gain achieved by the less readable code is not actually needed, so one is effectively exchanging "more dangerous" or "more expensive" code for an unneeded performance benefit.

In Java, a common trade-off of this type involves objects. Objects are often easier to use and are required by the heavily used standard Java collections, but the overhead of objects can be costly in terms of memory and performance. Goetz points out in "Project Valhalla: Goals" that Project Valhalla has the potential to be one of those relatively rare situations in which performance can be achieved along with "safety, abstraction, encapsulation, expressiveness, [and] maintainability."

Goetz provides a concise summary of the costs associated with objects and maintaining object identity. From this brief explanation of the drawbacks of maintaining object identity in cases in which it is not needed, Goetz moves to the now expected description of how value types for Java could address this issue. In addition to briefly describing advantages of value types, Goetz also provides some alternate names and phrases for value types that might help to understand them better:

  • "Aggregates, like Java classes, that renounce their identity"
  • "Codes like a class, works like an int"
  • "Faster Objects"
  • "Programmable Primitives"
  • "Cheaper objects"
  • "Richer primitives"

Regarding value types, Goetz writes, "We need not force users to choose between abstraction/encapsulation/safety and performance. We can have both." It's not every day that we can have our cake and eat it too.

In "Project Valhalla: Goals", Goetz also discusses the goal of "extend[ing] generics to allow abstraction over all types, including primitives, values, and even void." He uses examples of the JDK needing to supply multiple methods in its APIs to cover items that are not reference types but must be supported by the APIs because "generics are currently limited to abstracting only over reference types." Goetz points out that even when autoboxing allows a primitive to be used in the API expecting the reference type corresponding to the primitive (such as an int argument being autoboxed to an Integer reference), this boxing comes at a performance cost. With these explanations of the issues in place, Goetz summarizes, "Everyone would be better off if we could write a generic class or method once -- and abstract over all the possible data types, not just reference types." He adds, "Being able to write things once ... means simpler, more expressive, more regular, more testable, more composable libraries without giving up performance when dealing with primitives and values, as boxing does today."

Goetz concludes "Project Valhalla: Goals" with the statement, "Valhalla may be motivated by performance considerations, but a better way to view it as enhancing abstraction, encapsulation, safety, expressiveness, and maintainability -- 'without' giving up performance." I really like Project Valhalla from this perspective: we can get many of the benefits of using objects and reference types while not giving up the performance benefits of using primitives.

"Project Valhalla: Goals" provides much to think about in a concise and approachable manner. Reading it has increased my interest in Project Valhalla's future and I hope we can see it manifest in the JDK.

Wednesday, October 12, 2016

Unintentionally Obfuscated: Dealing with Challenging Legacy Code

I recently had to deal with some legacy code with significant performance issues. Fixing those issues was more challenging than it should have been, and I was reminded throughout the process that relatively simple good practices could have made the code easier to fix and might even have prevented the troublesome code from being written in the first place. I collect some of those reminders and reaffirming observations in this post.

I was having difficulty following what the code was doing and realized that some of the difficulty was due to the naming of the methods and parameters. Several methods had names that implied the method had a single major responsibility when it actually had multiple responsibilities. There are advantages to a method having only a single responsibility, but even methods with multiple responsibilities are more readable and understandable when their names appropriately communicate those responsibilities. As I changed the method names to reflect their multiple responsibilities, I was able to more clearly understand all the things they were doing and, in some cases, break them up into separate methods that each had a single responsibility and together implemented the multiple responsibilities of the original method.
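
As a conceptual illustration of that rename-then-split step (the names here are hypothetical, not from the code I actually worked on): a method originally named as if it only loaded a customer, but which also validated and cached the result, reads far more honestly once renamed, and the new name immediately suggests the single-responsibility methods it should delegate to.

import java.util.HashMap;
import java.util.Map;

public class CustomerLookup
{
   private final Map<Long, Customer> cache = new HashMap<>();

   // Before: a single method named loadCustomer did all three of these things.
   // Renaming it to reflect every responsibility made the split below obvious.
   public Customer loadValidateAndCacheCustomer(final long customerId)
   {
      final Customer customer = loadCustomer(customerId);
      validateCustomer(customer);
      cacheCustomer(customer);
      return customer;
   }

   private Customer loadCustomer(final long customerId)
   {
      return new Customer(customerId);  // stand-in for the real lookup
   }

   private void validateCustomer(final Customer customer)
   {
      if (customer.getId() < 1)
      {
         throw new IllegalArgumentException("Invalid customer ID: " + customer.getId());
      }
   }

   private void cacheCustomer(final Customer customer)
   {
      cache.put(customer.getId(), customer);
   }

   public static final class Customer
   {
      private final long id;
      public Customer(final long id) { this.id = id; }
      public long getId() { return id; }
   }
}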

Another issue that reduced the readability of the code and added to the confusion was out-of-date comments. These comments were likely correct at one point, but they had become obsolete or incorrect and often described something quite different from what the code was actually doing. As I removed some of these comments and corrected and updated others, the code became easier to read because the code and comments were in harmony. I tried to leave only those comments that were necessary because the code itself could not easily express the behavior.

Another common characteristic of this challenging piece of legacy code was the use of several overloaded methods. The types of the parameters in some cases were exactly the same, just in different orders to allow the methods to be overloaded. The methods were far clearer in terms of intent and in terms of differentiation between them when I broke the "clever" use of overloaded methods and named the methods more appropriately for what they were doing. Part of this was changing the method names to reflect the entire reason for the particular ordering of the parameters in the first place. For example (these are conceptual illustrations rather than the methods I worked on), if the methods were to convert an instance of ClassA to an instance of ClassB and vice versa, I changed the method names and signatures from convert(A, B) and convert(B, A) to convertAToB(A, B) and convertBToA(B, A). This made the calling code much more readable because one did not need to know that the local variables passed to these respective methods were of type A or of type B to know which method was being called.
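
Sticking with that conceptual illustration (ClassA and ClassB are hypothetical stand-ins for the real types), the before-and-after looked roughly like this:

public class Conversions
{
   // Before: overloads differed only in parameter order, so a call such as
   // convert(first, second) told the reader nothing unless the types of the
   // local variables were already known.
   //
   //    void convert(ClassA source, ClassB target) { ... }
   //    void convert(ClassB source, ClassA target) { ... }

   // After: the direction of the conversion is in the name, so the call site
   // reads correctly on its own.
   public static void convertAToB(final ClassA source, final ClassB target)
   {
      target.setValue(source.getValue());
   }

   public static void convertBToA(final ClassB source, final ClassA target)
   {
      target.setValue(source.getValue());
   }

   // Minimal stand-in types for the illustration.
   public static class ClassA
   {
      private String value;
      public String getValue() { return value; }
      public void setValue(final String value) { this.value = value; }
   }

   public static class ClassB
   {
      private String value;
      public String getValue() { return value; }
      public void setValue(final String value) { this.value = value; }
   }
}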

The following summarizes the relatively small issues that together made this a difficult piece of code to understand and fix:

  • Methods doing too many things (too many responsibilities)
  • Poorly named methods, arguments, and local variables that insufficiently described the constructs or even incorrectly described them
    • Overloaded methods with the same parameter types but very different functionality added to the confusion.
    • A colleague remarked that it would have been better if the methods and variables had been named something nonsensical like "fred" and "bob" than to have their incorrect names that seemed explanatory but ended up being misleading.
  • Obsolete and outdated comments that were now misleading or blatantly incorrect

Each of these issues by itself was a small thing. When combined in the same area of code, however, they compounded the negative effect of each other and together made the code difficult to understand and therefore difficult to change or fix. Even with a powerful IDE and unit tests, this code was difficult to work with. Making the code more readable was a more effective tool in diagnosing the issue and fixing it than even the powerful IDE and unit tests. More importantly, had this code been written in a more readable fashion in the first place, it would more likely have been implemented correctly and not required a fix later.

Thursday, October 6, 2016

JavaOne 2016 Observations by Proxy

I was not able to attend JavaOne 2016 and so am happy to see numerous resources online that allow me to make observations based on JavaOne 2016 content. I reference and briefly describe some of these JavaOne 2016 resources in this post and add some of my own observations based on use of those resources. These resources are useful to those of us who were, as Katharine stated in her JavaOne roundup, "unlucky enough to be stuck at home or work."

If time constraints allow it, the best source of information available online is often the presentations themselves. Select JavaOne 2016 presentations are available on YouTube and the keynotes are available On Demand. Summaries of the sessions that were streamed live can be found in the posts JavaOne Live Streaming Day 1 (September 19), JavaOne Live Streaming Day 2 (September 20), JavaOne Live Streaming Day 3 (September 21), and JavaOne Live Streaming Day 4 (September 22). The JavaOne Conference Twitter feed, @JavaOneConf, provides numerous photographs and links to more details about events, sessions, and activities of JavaOne 2016. Besides the live streaming posts, The Java Source Blog also provides brief posts representing JavaOne 2016 "Highlights" for each day: Monday, Tuesday, and Wednesday.

NetBeans Community Day at JavaOne 2016 Conference

NetBeans Community Day at JavaOne 2016 Conference was on Sunday, September 18. Mark Stephens has written a nice summary of the first two sessions of NetBeans Day 2016. Geertjan Wielenga has written about James Gosling on NetBeans and Apache. David R. Heffelfinger's The Coolest things I've seen at JavaOne 2016 so far reviews "several sessions showcasing NetBeans capabilities." A post written before NetBeans Community Day also describes that day's scheduled presentations. Josh Juneau's post JavaOne 2016 Follow-Up provides an overview of the entire conference with significant coverage of NetBeans Community Day.

JavaOne 2016 Keynote

The JavaOne 2016 Keynote was held on Sunday, September 18, and actually comprised multiple keynote addresses (Java Keynote, Intel Keynote, and Visionary Keynote). It has been reviewed several times online.

JavaOne 2016 Community Keynote

Katharine's post JavaOne Community Keynote: IBM to open source Java SDK highlights arguably the biggest announcement at the Community Keynote: the IBM SDK for Java is Going Open Source (slides). Monica Beckwith's InfoQ post JavaOne 2016: IBM's Keynote – Accelerating Innovation with Java provides another overview of the IBM portion of the Community Keynote "Accelerating Innovation with Java: The Future is Today."

JavaOne 2016 Sessions Available/Highlighted Online

Several sessions of JavaOne 2016 were recorded, were reviewed, and/or have had their slides made available.

JavaOne 2017

JavaOne 2017 will be October 1-5, 2017.


Monday, September 19, 2016

Painful Reminder of Java Date Nuances

I don't need to use java.util.Date much anymore these days, but recently chose to do so and was reminded of the pain of using the APIs associated with Java Date. In this post, I look at a couple of the somewhat surprising API expectations of the deprecated parameterized Date constructor that accepts six integers.

In 2016, Java developers are most likely to use Java 8's new Date/Time API when writing new code in Java SE 8, or a third-party Java date/time library such as Joda-Time when using a version of Java prior to Java 8. I chose to use Date recently in a very simple Java-based tool that I wanted to be deliverable as a single Java source code file (easy to compile without a build tool) and to not depend on any libraries outside Java SE. The target deployment environment for this simple tool is Java SE 7, so the Java 8 Date/Time API was not an option.

One of the disadvantages of the Date constructor that accepts six integers is the difficulty of differentiating between those six integers and ensuring that they're provided in the proper order. Even when the proper order is enforced, there are subtle surprises associated with specifying the month and year. Perhaps the easiest way to properly instantiate a Date object is either via SimpleDateFormat.parse(String) or via the not-deprecated Date(long) constructor that accepts milliseconds since epoch zero.

My first code listing demonstrates instantiation of a Date representing "26 September 2016" with 0 hours, 0 minutes, and 0 seconds. This code listing uses a String to instantiate the Date instance via use of SimpleDateFormat.parse(String).

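// DEFAULT_FORMAT and CONTROL_DATE_TIME_STR are constants defined in the full
// source: a SimpleDateFormat pattern and a matching date/time string for
// 26 September 2016 with 0 hours, 0 minutes, and 0 seconds. printDate is a
// simple helper that prints the provided description alongside the Date.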
final SimpleDateFormat formatter = new SimpleDateFormat(DEFAULT_FORMAT);
final Date controlDate = formatter.parse(CONTROL_DATE_TIME_STR);
printDate("Control Date/Time", controlDate);

When the above is run, the printed results are as expected and the output date matches the string provided and parsed for the instance of Date.

= Control Date/Time -> Mon Sep 26 00:00:00 MDT 2016

It can be tempting to use the Date constructors that accept integers to represent different "fields" of a Date instance, but these present the previously mentioned nuances.

The next code listing shows a very naive approach to invoking the Date constructor which accepts six integers representing these fields in this order: year, month, date, hour, minutes, seconds.

// This will NOT be the intended Date of 26 September 2016
// with 0 hours, 0 minutes, and 0 seconds because both the
// "month" and "year" parameters are NOT appropriate.
final Date naiveDate = new Date(2016, 9, 26, 0, 0, 0);
printDate("new Date(2016, 9, 26, 0, 0, 0)", naiveDate);

The output from running the above code has neither the same month (October rather than September) nor the same year (not 2016) as the "control" case shown earlier.

= new Date(2016, 9, 26, 0, 0, 0) -> Thu Oct 26 00:00:00 MDT 3916

The month was one later than we expected (October rather than September) because the month parameter is zero-based, with January represented by 0 and September thus represented by 8 rather than 9. One of the easiest ways to deal with the zero-based month and make the call to the Date constructor more readable is to use the appropriate java.util.Calendar field for the month. The next example demonstrates doing this with Calendar.SEPTEMBER.

// This will NOT be the intended Date of 26 September 2016
// with 0 hours, 0 minutes, and 0 seconds because the
// "year" parameter is not correct.
final Date naiveDate = new Date(2016, Calendar.SEPTEMBER, 26, 0, 0, 0);
printDate("new Date(2016, Calendar.SEPTEMBER, 26, 0, 0, 0)", naiveDate);

The code snippet just listed fixes the month specification, but the year is still off, as the associated output (shown next) demonstrates.

= new Date(2016, Calendar.SEPTEMBER, 26, 0, 0, 0) -> Tue Sep 26 00:00:00 MDT 3916

The year is still 1900 years off (3916 instead of 2016). This is due to the decision to have the first integer parameter of the six-integer Date constructor be the year less 1900. So, providing 2016 as that first argument specifies the year as 2016 + 1900 = 3916. To fix this, we need to instead provide 116 (2016-1900) as the first int parameter to the constructor. To make this more readable to anyone who would understandably find it surprising, I like to code it literally as 2016-1900, as shown in the next code listing.

final Date date = new Date(2016-1900, Calendar.SEPTEMBER, 26, 0, 0, 0);
printDate("new Date(2016-1900, Calendar.SEPTEMBER, 26, 0, 0, 0)", date);

With the zero-based month constant used and the intended year expressed as that year less 1900, the Date is instantiated correctly, as demonstrated in the next output listing.

= new Date(2016-1900, Calendar.SEPTEMBER, 26, 0, 0, 0) -> Mon Sep 26 00:00:00 MDT 2016

The Javadoc documentation for Date does describe these nuances, but this is a reminder that it's often better to have clear, understandable APIs that don't need nuances described in comments. The Javadoc for the Date(int, int, int, int, int, int) constructor does advertise that the year needs 1900 subtracted from it and that the months are represented by integers from 0 through 11. It also describes why this six-integer constructor is deprecated: "As of JDK version 1.1, replaced by Calendar.set(year + 1900, month, date, hrs, min, sec) or GregorianCalendar(year + 1900, month, date, hrs, min, sec)."

The similar six-integer GregorianCalendar(int, int, int, int, int, int) constructor is not deprecated and, while it still expects a zero-based month parameter, it does not expect the year to have 1900 subtracted from it when providing the year parameter. When the month is specified using the appropriate Calendar month constant, this makes the API call far more readable: 2016 can be passed for the year and Calendar.SEPTEMBER can be passed for the month.
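
For comparison, here is a minimal sketch (mine, not one of the post's earlier listings) of instantiating the same 26 September 2016 date/time via the six-integer GregorianCalendar constructor; printDate is the same helper used in the earlier listings.

// Year is passed as-is (2016, not 2016-1900); the month is still zero-based,
// so Calendar.SEPTEMBER keeps the call readable.
final Calendar calendar = new GregorianCalendar(2016, Calendar.SEPTEMBER, 26, 0, 0, 0);
final Date dateFromCalendar = calendar.getTime();
printDate("new GregorianCalendar(2016, Calendar.SEPTEMBER, 26, 0, 0, 0)", dateFromCalendar);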

I use the Date class directly so rarely now that I forget its nuances and must re-learn them when the rare occasion presents itself for me to use Date again. So, I am leaving these observations regarding Date for my future self.

  1. If using Java 8+, use the Java 8 Date/Time API.
  2. If using a version of Java prior to Java 8, use Joda-Time or other improved Java library.
  3. If unable to use Java 8 or third-party library, use Calendar instead of Date as much as possible and especially for instantiation.
  4. If using Date anyway, instantiate the Date using either the SimpleDateFormat.parse(String) approach or using Date(long) to instantiate the Date based on milliseconds since epoch zero.
  5. If using the Date constructors accepting multiple integers representing date/time components individually, use the appropriate Calendar month field to make API calls more readable and consider writing a simple builder to "wrap" the calls to the six-integer constructor (a rough sketch of such a builder follows this list).
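
The following is a rough sketch of what such a wrapping builder might look like. It is illustrative only: DateBuilder and its method names are my own invention rather than a JDK or library API, and it simply hides the year-less-1900 and zero-based-month conventions behind more obviously named methods.

import java.util.Calendar;
import java.util.Date;

public class DateBuilder
{
   private int year = 1970;
   private int month = Calendar.JANUARY;
   private int dayOfMonth = 1;
   private int hours = 0;
   private int minutes = 0;
   private int seconds = 0;

   public DateBuilder year(final int fourDigitYear)
   {
      this.year = fourDigitYear;
      return this;
   }

   public DateBuilder month(final int calendarMonthConstant)
   {
      this.month = calendarMonthConstant;
      return this;
   }

   public DateBuilder dayOfMonth(final int dayOfMonth)
   {
      this.dayOfMonth = dayOfMonth;
      return this;
   }

   public DateBuilder time(final int hours, final int minutes, final int seconds)
   {
      this.hours = hours;
      this.minutes = minutes;
      this.seconds = seconds;
      return this;
   }

   @SuppressWarnings("deprecation")  // intentionally wraps the deprecated constructor
   public Date build()
   {
      // The caller supplies the actual year and a Calendar month constant;
      // the builder applies the constructor's conventions in one place.
      return new Date(year - 1900, month, dayOfMonth, hours, minutes, seconds);
   }
}

Client code could then read as a single expressive chain:

final Date date = new DateBuilder().year(2016).month(Calendar.SEPTEMBER).dayOfMonth(26).time(0, 0, 0).build();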

We can learn a lot about what makes an API useful and easy to learn, and what makes an API more difficult to learn, from using other people's APIs. Hopefully these lessons learned will benefit us in writing our own APIs. The Date(int, int, int, int, int, int) constructor that was the focus of this post presents several issues that make for a less than optimal API. The multiple parameters of the same type make it easy to provide the arguments out of order, and the unnatural rules for providing the year and month put an extra burden on the client developer to read the Javadoc to understand these not-so-obvious rules.

Tuesday, September 13, 2016

Apache NetBeans?

It's fairly common to have significant announcements related to the world of Java released in the days and weeks leading up to JavaOne. With that in mind, it's not surprising that we're seeing some significant Java-related announcements just prior to JavaOne 2016 that begins next week. One announcement is Mark Reinhold's Proposed schedule change for JDK 9 in which Reinhold proposes "a four-month extension of the JDK 9 schedule, moving the General Availability (GA) milestone to July 2017." Another major proposal, the subject of this post, is the proposal by Oracle for Oracle to "contribut[e] the NetBeans IDE as a new open-source project within the Apache Incubator."

The Apache NetBeans proposal is summarized online, but additional details are available on the Apache Software Foundation's Incubator Wiki page called NetBeansProposal. The NetBeansProposal Wiki page provides several details related to the benefits, costs, and risks associated with moving NetBeans to the Apache Software Foundation. Additional views on this proposal that summarize or interpret the proposal can be found in online resources such as Proposal has NetBeans moving to Apache Incubator, Oracle's NetBeans Headed to The Apache Software Foundation, Oracle no more - NetBeans is moving to Apache, Java founder James Gosling endorses Apache takeover of NetBeans Java IDE, An unexpected proposal: Oracle bids farewell to NetBeans, and Oracle Proposes NetBeans IDE as Apache Incubator Project. There are also two Reddit threads on this subject on the subreddits programming and java.

I've felt for some time that the open source projects I'm most willing to "take a chance on" and recommend to management and customers are those that have either strong corporate sponsorship or are affiliated with an established and successful umbrella organization such as Apache Software Foundation. Therefore, although I don't like to see NetBeans lose the corporate backing and investment of Oracle, the Apache Software Foundation does provide a home for NetBeans to continue being a successful project.

Like many software developers who have been working in this area for years, I've been using Apache Software Foundation projects for most of those years. The liberal Apache 2 license is welcoming and uncomplicated. The projects tend to be well run and well used. On occasion when projects are no longer active, the ASF is fairly timely in moving such projects to the Apache Attic. Projects associated with ASF tend to enjoy benefits often associated with open source such as multiple contributors including multiple reviewers and real-life "testers." Many of the ASF projects enjoy a large community with the accompanying benefits of a large community such as improved main site documentation as well as third-party supplemental documentation with blogs, books, and articles. Of course, NetBeans already enjoys much of this, so moving to ASF might be more of an approach to retain some of the advantages it already enjoys while at the same time potentially encouraging greater community collaboration.

The Apache Software Foundation projects I've used over the years seem to come from two different types of origins. Some of them have been associated with ASF from their beginning or almost their beginning while others were popular projects already when they were moved to the ASF. NetBeans falls in the latter category with other projects that I used before they went to ASF such as Groovy (from SpringSource/Pivotal) and Flex (from Adobe). It seems likely that Oracle has proposed donating NetBeans to Apache Software Foundation for the same reasons that Pivotal and Adobe donated Groovy and Flex respectively to Apache Software Foundation.

The examples just mentioned (Adobe|Flex, Pivotal|Groovy, and Oracle|NetBeans) are just a subset of examples that could be cited in which corporations who are the sponsors and dominant contributors have given away the open source project, typically with the intent to spend fewer resources managing that project. If NetBeans is able to enjoy significant community contributions, the disadvantages of reduced corporate sponsorship might be at least partially offset. Some of this, of course, depends on what level of involvement Oracle supports its employees in contributing to NetBeans.

When Oracle acquired Sun, many of us wondered about the future of GlassFish (Oracle had already acquired WebLogic from BEA) and NetBeans (Oracle already had a free, but not open source, Java IDE in JDeveloper). Oracle announced in 2013 that GlassFish 4.x would not be available as a commercial offering and would only continue as an unsupported Java EE reference implementation (though third-party support can be found for the "drop-in replacement" Payara Server). Although there are some advantages to this "developer-friendly" reference implementation in terms of trying new Java EE features and learning Java EE concepts, most Java EE developers I'm aware of who use an open source Java EE application server for production have moved to WildFly. Given this, I've been happy to see NetBeans moving along and being supported as well as it has for as many years as it has.

One potentially new prospect for NetBeans is being the basis for more specialized IDEs. Eclipse has long been the basis of specialized IDEs and development tool suites such as Spring Tool Suite (Spring IDE), Oracle Enterprise Pack for Eclipse, Adobe Flash Builder, Red Hat JBoss Developer Studio, and Zend Studio. Similarly, Android Studio is built on IntelliJ IDEA. Although there are already tools based on NetBeans (such as VisualVM), NetBeans's independence from Oracle may seem more attractive to some for future tools' development.

At the time of this writing, the NetBeansProposal Wiki page already lists 63 people in "the initial list of individual contributors" (including 26 contributors associated with Oracle). That, along with the extensive resources already available related to NetBeans, encourages me and makes me think that NetBeans could be a successful and thriving Apache Software Foundation project. I certainly prefer NetBeans's chances as an Apache Software Foundation project over its chances if it existed in a state similar to that placed upon GlassFish.

We Java developers are fortunate to have multiple very strong IDEs available for our use. It's in our best interest for each of them to remain strong and viable, as all the IDEs (and the developers who use them) benefit from the competition and from the innovation that talented developers working on these IDEs bring to our development experience. Each of the IDEs offers different advantages and strengths, and I'm hoping that we can benefit from NetBeans's current and future strengths for years to come.

Monday, September 12, 2016

More on Spooling Queries and Results in psql

In the recent blog post SPOOLing Queries with Results in psql, I looked briefly at some PostgreSQL database psql meta-commands and options that can be used to emulate Oracle database's SQL*Plus spooling behavior. In that post, I wrote, "I have not been able to figure out a way to ... have both the query and its results written to the file without needing to use \qecho." Fortunately, since that writing, a colleague pointed me to the psql option --log-file (or -L).

The PostgreSQL psql documentation states that the --log-file / -L option "write[s] all query output into file filename, in addition to the normal output destination." This handy single option prints both the query and its non-error results to the indicated file. For example, if I start psql with the command "psql -U postgres -L C:\output\albums.txt" and then run the query select * from albums;, the generated file C:\output\albums.txt appears like this:

********* QUERY **********
select * from albums;

           title           |     artist      | year 
---------------------------+-----------------+------
 Back in Black             | AC/DC           | 1980
 Slippery When Wet         | Bon Jovi        | 1986
 Third Stage               | Boston          | 1986
 Hysteria                  | Def Leppard     | 1987
 Some Great Reward         | Depeche Mode    | 1984
 Violator                  | Depeche Mode    | 1990
 Brothers in Arms          | Dire Straits    | 1985
 Rio                       | Duran Duran     | 1982
 Hotel California          | Eagles          | 1976
 Rumours                   | Fleetwood Mac   | 1977
 Kick                      | INXS            | 1987
 Appetite for Destruction  | Guns N' Roses   | 1987
 Thriller                  | Michael Jackson | 1982
 Welcome to the Real World | Mr. Mister      | 1985
 Never Mind                | Nirvana         | 1991
 Please                    | Pet Shop Boys   | 1986
 The Dark Side of the Moon | Pink Floyd      | 1973
 Look Sharp!               | Roxette         | 1988
 Songs from the Big Chair  | Tears for Fears | 1985
 Synchronicity             | The Police      | 1983
 Into the Gap              | Thompson Twins  | 1984
 The Joshua Tree           | U2              | 1987
 1984                      | Van Halen       | 1984
(23 rows)

One drawback when using -L is that any error messages are not written to the file that the queries and successful results are written to. The next screen snapshot demonstrates an error caused by querying from the column name rather than from the table name and the listing after the screen snapshot shows what appears in the output file.

********* QUERY **********
select * from artist;

The output file generated with psql's -L option shows the incorrect query, but the generated file does not include the error message that was shown in the psql terminal application ('ERROR: relation "artist" does not exist'). I don't know of any way to easily ensure that this error message is written to the same file that the query is written to. Redirection of standard output and standard error is a possibility, but then I'd need to redirect the error messages to a different file than the file to which the query and output are being written based on the filename provided with the -L option.
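
If capturing those error messages matters, one possibility (shown here only as a sketch of the idea; it is not something the -L option itself provides) is to additionally redirect psql's standard error to a second file when launching psql:

      psql -U postgres -L C:\output\albums.txt 2> C:\output\errors.txt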

Saturday, September 10, 2016

AutoCommit in PostgreSQL's psql

One potential surprise for someone familiar with Oracle database's SQL*Plus when being introduced to PostgreSQL database's psql may be psql's default enabling of autocommit. This post provides an overview of psql's handling of autocommit and some related nuances.

By default, Oracle's SQL*Plus command-line tool does not automatically commit DML statements, and the operator must explicitly commit these statements as part of a transaction (or exit from the session without rolling back). Because of this, developers and administrators familiar with using SQL*Plus to work with the Oracle database might be a bit surprised when the opposite is true for PostgreSQL and its psql command-line tool. Autocommit is turned on by default in psql, meaning that every statement (including DML statements such as INSERT, UPDATE, and DELETE) is automatically committed once submitted.

One consequence of PostgreSQL's psql enabling autocommit by default is that COMMIT statements are unnecessary. When one tries to submit a commit; in psql with autocommit enabled, the WARNING-level message "there is no transaction in progress" is shown. This is demonstrated in the next screen snapshot.

The remainder of this post looks at how to turn off this automatic committing of all manipulation statements in psql.

One often-cited approach to overriding psql's autocommit default is to explicitly begin a transaction with the BEGIN keyword; psql then won't commit until an explicit COMMIT is provided. However, this can become a bit tedious over time, and fortunately psql provides a convenient way to disable autocommit.
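
For example (reusing the albums table from the psql spooling example above purely as an illustration), an explicitly started transaction is not committed until COMMIT is issued and can instead be discarded with ROLLBACK:

      BEGIN;
      INSERT INTO albums (title, artist, year) VALUES ('Moving Pictures', 'Rush', 1981);
      -- not yet committed; finalize with COMMIT or discard with ROLLBACK
      COMMIT;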

Before getting into the easy approach used to disable autocommit in psql, I'll point out that one should not confuse psql's behavior with the advice for ECPG (Embedded SQL in C). When using ECPG, the "SET AUTOCOMMIT" section of the PostgreSQL documentation on ECPG applies. Although this applies only to ECPG and does NOT apply to psql, it would be easy to miss that distinction because one of the first responses to a Google search for "psql autocommit" is this ECPG-specific manual page. That ECPG-specific manual page states that the command looks like "SET AUTOCOMMIT { = | TO } { ON | OFF }" and adds, "By default, embedded SQL programs are not in autocommit mode, so COMMIT needs to be issued explicitly when desired." This is like Oracle's SQL*Plus and is not how psql behaves by default.

Fortunately, it's very easy to disable autocommit in psql. One merely needs to enter the following at the psql command prompt (AUTOCOMMIT is case sensitive and should be all uppercase):

      \set AUTOCOMMIT off

This simple command disables autocommit for the session. One can determine whether autocommit is enabled with a simple \echo meta-command like this (AUTOCOMMIT is case sensitive, all uppercase, and prefixed with a colon to indicate that it's a variable):

      \echo :AUTOCOMMIT

The next screen snapshot demonstrates the discussion so far. It uses an \echo to indicate the default nature of autocommit (on) and how use of \set AUTOCOMMIT allows it to be disabled (off).

If it's desired to "always" have autocommit disabled, the \set AUTOCOMMIT off meta-command can be added to one's local ~/.psqlrc file. For an even more global setting, this meta-command can be placed in a psqlrc file in the database's system config directory (which can be located using the PostgreSQL operating system-level command pg_config --sysconfdir as shown in the next screen snapshot).

One last nuance to be wary of when using psql and dealing with autocommit is that show AUTOCOMMIT; is generally not useful. In PostgreSQL 9.5, as the next screen snapshot demonstrates, an error message makes it clear that it's not even available anymore.


Although autocommit is enabled by default in PostgreSQL database's psql command-line tool, it can be easily disabled using \set AUTOCOMMIT off explicitly in a session or via configuration in the personal ~/.psqlrc file or in the global system configuration psqlrc file.