ATS (programming language)

ATS (Applied Type System) is a programming language designed to unify programming with formal specification. ATS supports combining theorem proving with practical programming through the use of advanced type systems.[2] A past version of The Computer Language Benchmarks Game demonstrated that the performance of ATS is comparable to that of the C and C++ programming languages.[3] Through theorem proving and strict type checking, the compiler can detect and prove that implemented functions are not susceptible to bugs such as division by zero, memory leaks, buffer overflows, and other forms of memory corruption, by verifying pointer arithmetic and reference counting before the program compiles. Additionally, by using the integrated theorem-proving system of ATS (ATS/LF), the programmer may make use of static constructs intertwined with the operative code to prove that a function meets its specification.

ATS
Paradigm: multi-paradigm: functional, imperative
Designed by: Hongwei Xi at Boston University
Stable release: ATS2-0.3.13[1] / February 16, 2019
Typing discipline: static
License: GPLv3
Filename extensions: .sats, .dats, .cats, .hats
Website: http://www.ats-lang.org/
Influenced by: Dependent ML, ML, OCaml, C++

History

ATS is derived mostly from the ML and OCaml programming languages. An earlier language by the same author, Dependent ML, has been incorporated into it.

The latest version of ATS1 (Anairiats) was released as v0.2.12 on 2015-01-20.[4] The first version of ATS2 (Postiats) was released in September 2013.[5]

Theorem proving

The primary focus of ATS is to support theorem proving in combination with practical programming.[2] With theorem proving one can prove, for instance, that an implemented function does not produce memory leaks. Theorem proving also prevents other bugs that might otherwise be found only during testing. ATS incorporates a system similar to those of proof assistants, which usually aim only to verify mathematical proofs; ATS, however, uses this ability to prove that the implementations of its functions operate correctly and produce the expected output.

As a simple example, in a function using division, the programmer may prove that the divisor will never equal zero, preventing a division-by-zero error. Suppose the divisor X is computed as 5 times the length of list A. One can prove that, in the case of a non-empty list, X is non-zero, since X is the product of two non-zero numbers (5 and the length of A). A more practical example is proving, through reference counting, that the retain count on an allocated block of memory is counted correctly for each pointer. Then one can know, and quite literally prove, that the object will not be deallocated prematurely and that memory leaks will not occur.
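
A sketch of the division case (safe_div is a hypothetical name, not an ATS library function; g0ofg1 converts a statically indexed integer to a plain one):

 fun safe_div {n:int}{d:int | d != 0}
   (n: int n, d: int d): int = g0ofg1(n) / g0ofg1(d)

 val q = safe_div (10, 2)    // accepted
 // val q = safe_div (10, 0) // rejected: the constraint 0 != 0 is false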

The benefit of the ATS system is that, since all theorem proving occurs strictly within the compiler, it has no effect on the speed of the executable program. ATS code is often harder to compile than standard C code, but once it compiles, the programmer can be certain that it runs correctly to the degree specified by the proofs (assuming the compiler and runtime system are correct).

In ATS, proofs are separate from implementation, so it is possible to implement a function without proving it if the programmer so desires.
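
For instance, the factorial below (a minimal sketch) type-checks with no proof obligations attached; a proof-carrying version appears later in the article:

 fun fact (n: int): int =
   if n > 0 then n * fact (n - 1) else 1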

Data representation

According to the author, Hongwei Xi, the efficiency of ATS[6] is largely owed to the way data is represented in the language and to tail-call optimizations (which are generally important for the efficiency of functional programming languages). Data can be stored in a flat or unboxed representation rather than a boxed representation.

Theorem proving: an introductory case

Propositions

dataprop expresses predicates as algebraic types.

Predicates in pseudo-code, somewhat similar to ATS source (see below for valid ATS source):

 FACT(n, r)         iff    fact(n) = r
 MUL(n, m, prod)    iff    n * m = prod
  
 FACT(n, r) = 
       FACT(0, 1) 
     | FACT(n, r) iff FACT(n-1, r1) and MUL(n, r1, r)   // for n > 0
 
 // expresses fact(n) = r  iff  r = n * r1  and  r1 = fact(n-1)

In ATS code:

 dataprop FACT (int, int) =
   | FACTbas (0, 1)             // basic case: FACT(0, 1)
   | {n:int | n > 0} {r,r1:int} // inductive case
     FACTind (n, r) of (FACT (n-1, r1), MUL (n, r1, r))

where FACT (int, int) is a proof type

Example

A non-tail-recursive factorial with proposition ("theorem") proving through the dataprop construct.

The evaluation of fact (n-1) returns a pair (proof_n_minus_1 | result_of_n_minus_1), which is used in the calculation of fact (n). The proofs express the predicates of the proposition.

Part 1 (algorithm and propositions)

 [FACT (n, r)] implies [fact (n) = r]
 [MUL (n, m, prod)] implies [n * m = prod]
 FACT (0, 1)
 FACT (n, r) iff FACT (n-1, r1) and MUL (n, r1, r) forall n > 0

To remember:

{...} universal quantification
[...] existential quantification
(... | ...)   (proof | value)
@(...) flat tuple or variadic function parameters tuple
.<...>. termination metric[7]

#include "share/atspre_staload.hats"

dataprop FACT (int, int) =
  | FACTbas (0, 1) of () // basic case
  | {n:nat}{r:int}       // inductive case
    FACTind (n+1, (n+1)*r) of (FACT (n, r))

(* note that int(x), also written int x, is the singleton type of the int value x.

 The function signature below says:
 forall n:nat, there exists r:int such that fact(num: int(n)) returns (FACT(n, r) | int(r)) *)

fun fact{n:nat} .<n>. (n: int (n)) : [r:int] (FACT (n, r) | int(r)) =
(
  ifcase
  | n > 0 => (FACTind(pf1) | n * r1) where
  { 
    val (pf1 | r1) = fact (n-1)
  }
  | _(*else*) => (FACTbas() | 1)
)

Part 2 (routines and test)

implement main0 (argc, argv) =
{
  val () = if (argc != 2) then prerrln! ("Usage: ", argv[0], " <integer>")

  val () = assert (argc >= 2)
  val n0 = g0string2int (argv[1])
  val n0 = g1ofg0 (n0)
  val () = assert (n0 >= 0)
  val (_(*pf*) | res) = fact (n0)

  val ((*void*)) = println! ("fact(", n0, ") = ", res)
}

This can all be added to a single file and compiled as follows. Compilation should work with various back-end C compilers, e.g., GCC. Garbage collection is not used unless explicitly enabled with -D_ATS_GCATS.[8]

$ patscc fact1.dats -o fact1
$ ./fact1 4
fact(4) = 24

This compiles and prints the expected result.

Features

Basic types

  • bool (true, false)
  • int (literals: 255, 0377, 0xFF), unary minus as ~ (as in ML)
  • double
  • char 'a'
  • string "abc"
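
Sample declarations of these types (a minimal sketch):

 val flag: bool   = true
 val n:    int    = 255
 val m:    int    = ~5      // unary minus is ~, as in ML
 val x:    double = 2.718
 val c:    char   = 'a'
 val s:    string = "abc"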

Tuples and records

prefix @ or none means direct, flat or unboxed allocation
  val x : @(int, char) = @(15, 'c')  // x.0 = 15 ; x.1 = 'c'
  val @(a, b) = x                    // pattern matching binding, a= 15, b='c'
  val x = @{first=15, second='c'}    // x.first = 15
  val @{first=a, second=b} = x       // a= 15, b='c'
  val @{second=b, ...} = x           // with omission, b='c'
prefix ' means indirect or boxed allocation
  val x : '(int, char) = '(15, 'c')  // x.0 = 15 ; x.1 = 'c'
  val '(a, b) = x                    // a= 15, b='c'
  val x = '{first=15, second='c'}    // x.first = 15
  val '{first=a, second=b} = x       // a= 15, b='c'
  val '{second=b, ...} = x           // b='c'
special

With '|' as a separator, some functions return their result value wrapped together with proofs of the predicates it satisfies:

val ( predicate_proofs | values) = myfunct params
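
The factorial example above follows this form: in val (pf1 | r1) = fact (n-1), the proof of FACT(n-1, r1) is bound to pf1 and the computed value to r1.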

Common

{...} universal quantification
[...] existential quantification
(...) parenthetical expression or tuple

(... | ...)     (proofs | values)
.<...>. termination metric

@(...)          flat tuple or variadic function parameters tuple (see example's printf)

@[byte][BUFLEN]     type of an array of BUFLEN values of type byte[9]
@[byte][BUFLEN]()   array instance
@[byte][BUFLEN](0)  array initialized to 0

Dictionary

sort
domain
 sortdef nat = {a: int | a >= 0}      // from prelude: ∀ a ∈ int ...

 typedef String = [a:nat] string(a)   // [..]: ∃ a ∈ nat ...
type (as sort)
generic sort for boxed elements, whose size is that of a pointer word, to be used in type-parameterized polymorphic functions; also called "boxed types"[10]
// {..}: ∀ a,b ∈ type ...
fun {a,b:type} swap_type_type (xy: @(a, b)): @(b, a) = (xy.1, xy.0)
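
A possible use of the template above (a sketch; string has sort type in ATS2, so it qualifies as a parameter):
 val pair    = @("foo", "bar")
 val swapped = swap_type_type<string,string> (pair)   // @("bar", "foo")
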
t@ype
version of the previous sort with the size abstracted: it covers flat (unboxed) types of any size, and supersets type.[10]
viewtype
a sort like type, but additionally carrying a view (memory association); its values are treated linearly
viewt@ype
version of viewtype with the size abstracted. It supersets viewtype
view
relation of a Type and a memory location. The infix @ is its most common constructor
T @ L asserts that there is a view of type T at location L
 fun {a:t@ype} ptr_get0 {l:addr} (pf: a @ l | p: ptr l): @(a @ l | a)
 
 fun {a:t@ype} ptr_set0 {l:addr} (pf: a? @ l | p: ptr l, x: a): @(a @ l | void)
the type of ptr_get0 (T) is ∀ l : addr . ( T @ l | ptr( l ) ) -> ( T @ l | T) // see manual, section 7.1. Safe Memory Access through Pointers[11]
viewdef array_v (a:viewt@ype, n:int, l: addr) = @[a][n] @ l
T?
possibly uninitialized type

pattern matching exhaustivity

as in case+, val+, type+, viewtype+, ...

  • with the suffix '+' the compiler issues an error in case of non-exhaustive alternatives
  • without a suffix the compiler issues a warning
  • with '-' as suffix, the exhaustivity check is suppressed
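
For example, with case+ the compiler rejects a match that omits one of the alternatives (a minimal sketch):

datatype sign = Neg | Zero | Pos

fun sign_int (s: sign): int =
  case+ s of            // dropping any branch below is a compile-time error
  | Neg () => ~1
  | Zero () => 0
  | Pos () => 1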

modules

 staload "foo.sats" // foo.sats is loaded and then opened into the current namespace

 staload F = "foo.sats" // to use identifiers qualified as $F.bar

 dynload "foo.dats" // loaded dynamically at run-time

dataview

Dataviews are often declared to encode recursively defined relations on linear resources.[12]

dataview array_v (a: viewt@ype+, int, addr) =
  | {l: addr}
    array_v_none (a, 0, l)
  | {n: nat} {l: addr}
    array_v_some (a, n+1, l) of (a @ l, array_v (a, n, l + sizeof a))

datatype / dataviewtype

Datatypes[13]

datatype workday = Mon | Tue | Wed | Thu | Fri

lists

datatype list0 (a:t@ype) = list0_cons (a) of (a, list0 a) | list0_nil (a)
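
Given this declaration, a list can be built and traversed as usual (a sketch; length0 is a hypothetical helper):

fun length0 (xs: list0 (int)): int =
  case+ xs of
  | list0_cons (_, rest) => 1 + length0 (rest)
  | list0_nil () => 0

val xs = list0_cons (1, list0_cons (2, list0_nil ()))
val n  = length0 (xs)   // n = 2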

dataviewtype

A dataviewtype is similar to a datatype, but it is linear. With a dataviewtype, the programmer is allowed to explicitly free (or deallocate) in a safe manner the memory used for storing constructors associated with the dataviewtype.[14]
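
A minimal sketch of a linear list of integers; the '~' pattern in case+ consumes (deallocates) the node it matches:

dataviewtype intlist =
  | ilnil of () | ilcons of (int, intlist)

fun intlist_free (xs: intlist): void =
  case+ xs of
  | ~ilnil () => ()
  | ~ilcons (_, rest) => intlist_free (rest)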

variables

local variables

var res: int with pf_res = 1   // introduces pf_res as an alias of view @ (res)

on stack array allocation:

#define BUFLEN 10
var !p_buf with pf_buf = @[byte][BUFLEN](0)    // pf_buf = @[byte][BUFLEN](0) @ p_buf[15]

See val and var declarations[16]

