I have been reading Kasahara [KAS] with @kyotomathmath. The book got me thinking about the definition of the differential operators ∂z, ∂z̄. Let’s take a look at what happens when we discuss differentiation of complex functions.
The organization of this article follows that of [KAS]; hence Chapter 1 of this article corresponds to Chapter 1 of [KAS], and so on.
Let’s define the (formal) partial differentials ∂/∂z, ∂/∂z̄ for complex functions.
Fix a field k, and consider an algebra A over k.
A linear map D : A → A is called a derivation if D(ab) = D(a)b + aD(b) for all a, b ∈ A.
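To see the definition in action, here is a minimal sketch in plain Python (not from [KAS]; polynomials are encoded as coefficient lists with index = degree) checking the Leibniz rule for the formal derivative on ℝ[x]:

```python
def poly_mul(a, b):
    # product of two coefficient lists (index = degree)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    pad = lambda p: p + [0.0] * (n - len(p))
    return [s + t for s, t in zip(pad(a), pad(b))]

def D(p):
    # formal derivative: D(sum a_i x^i) = sum i a_i x^(i-1)
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

a = [1.0, 2.0, 3.0]       # 1 + 2x + 3x^2
b = [0.0, 1.0, 0.0, 4.0]  # x + 4x^3

lhs = D(poly_mul(a, b))                               # D(ab)
rhs = poly_add(poly_mul(D(a), b), poly_mul(a, D(b)))  # D(a)b + a D(b)
print(lhs == rhs)  # True
```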
Denote the set of all derivations on A by Der_k A: Der_k A := {D : A → A : derivation}.
When there is no ambiguity, we denote Der_k A simply by Der A.
It’s easy to show that Der A is a Lie subalgebra of gl(A).
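As a quick illustration of the bracket (a sketch in plain Python with ad-hoc sample polynomials): on ℝ[x], both D₁ = d/dx and the Euler operator D₂ = x·d/dx are derivations; the commutator [D₁, D₂] equals D₁ and so is again a derivation, while the plain composition D₁∘D₂ fails the Leibniz rule. This is why Der A is closed under brackets but not under composition.

```python
def poly_mul(a, b):
    # product of coefficient lists (index = degree)
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def add(a, b):
    n = max(len(a), len(b))
    pad = lambda p: p + [0.0] * (n - len(p))
    return [s + t for s, t in zip(pad(a), pad(b))]

def sub(a, b):
    return add(a, [-c for c in b])

def D1(p):
    # d/dx on R[x]
    return [i * c for i, c in enumerate(p)][1:] or [0.0]

def D2(p):
    # x * d/dx (the Euler operator): x^i -> i x^i, also a derivation
    return [i * c for i, c in enumerate(p)]

def bracket(p):
    # the Lie bracket [D1, D2] = D1 D2 - D2 D1
    return sub(D1(D2(p)), D2(D1(p)))

p = [0.0, 1.0, 0.0, 2.0]  # x + 2x^3
# [D1, D2] acts as D1 itself; in particular it is again a derivation

# the plain composition D1 . D2, however, violates the Leibniz rule:
a = b = [0.0, 1.0]  # x
comp = lambda q: D1(D2(q))
lhs = comp(poly_mul(a, b))                             # (D1 D2)(ab)
rhs = add(poly_mul(comp(a), b), poly_mul(a, comp(b)))  # (D1 D2)(a) b + a (D1 D2)(b)
```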
Let k = ℝ and A = ∑_{i=1}^∞ C^i(𝒟), where 𝒟 ⊂ ℝⁿ and C^i(𝒟) := {f : 𝒟 → ℝ : of class C^i}.
The partial differentials ∂_i : A → A, f ↦ ∂f/∂x_i are derivations on A.
The following lemma is useful to construct a new derivation from old ones.
Let k be a field and A, B algebras /k. Then A ⊕ B is an algebra /k under componentwise multiplication, and for any D_A ∈ Der A and D_B ∈ Der B, we have D_A ⊕ D_B ∈ Der(A ⊕ B).
This lemma can easily be verified by straightforward calculations.
Again, consider the case k = ℝ, 𝒟 ⊂ ℝⁿ, A = ∑C^i(𝒟). There is an isomorphism (∑C^i(𝒟))^m ≅ ∑C^i(𝒟, ℝ^m) of vector spaces /ℝ, where C^i(𝒟, ℝ^m) := {f : 𝒟 → ℝ^m : of class C^i}; since A^m is an algebra /ℝ by the above lemma, ∑C^i(𝒟, ℝ^m) can be regarded as an algebra /ℝ.
Then, e.g., ∂_{i₁} ⊕ ⋯ ⊕ ∂_{i_m} ∈ Der(∑C^i(𝒟, ℝ^m)) for all 1 ⩽ i_j ⩽ n.
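A numeric sketch of this (plain Python with central differences; the sample functions and the choice i₁ = 1, i₂ = 2 are ad hoc): the operator ∂₁ ⊕ ∂₂ obeys the Leibniz rule with respect to the componentwise product on C^i(𝒟, ℝ²).

```python
h = 1e-5

def partial(j, f, p):
    # central-difference approximation of d/dx_j (j = 0 or 1) at the point p
    q_plus = list(p); q_plus[j] += h
    q_minus = list(p); q_minus[j] -= h
    return (f(q_plus) - f(q_minus)) / (2 * h)

# two sample elements of C^i(D, R^2), written componentwise
f = (lambda p: p[0] * p[1], lambda p: p[0] + p[1] ** 2)
g = (lambda p: p[0] ** 2,   lambda p: 3.0 * p[1])

# componentwise product: (f * g)_k = f_k * g_k
fg = (lambda p: f[0](p) * g[0](p), lambda p: f[1](p) * g[1](p))

def D(F, p):
    # the derivation d_1 (+) d_2: d/dx_1 on the first slot, d/dx_2 on the second
    return (partial(0, F[0], p), partial(1, F[1], p))

p0 = [0.4, 1.3]
lhs = D(fg, p0)                                     # D(f * g)
rhs = tuple(D(f, p0)[k] * g[k](p0) + f[k](p0) * D(g, p0)[k]
            for k in range(2))                      # D(f) g + f D(g)
```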
Next, suppose we have an extension of fields k ⊂ K. Any vector space V /k extends to a vector space V ⊗_k K /K. Explicitly, the scalar multiplication on V ⊗_k K is given by μ·(v⊗λ) := v⊗(μλ) for v ∈ V and λ, μ ∈ K.
Suppose, furthermore, that A is an algebra /k, and consider the vector space A ⊗_k K /K. This is now an algebra /K; indeed, the ring hom k → A induces a ring hom K (≅ k ⊗_k K) → A ⊗_k K, which makes A ⊗_k K into an algebra /K.
Let k ⊂ K be an extension of fields and A, B algebras /k, so that A ⊗_k K and B ⊗_k K are algebras /K. Then: i) for any k-linear map f : A → B, the induced map f⊗1 : A ⊗_k K → B ⊗_k K is K-linear; ii) for any D ∈ Der_k A, we have D⊗1 ∈ Der_K(A ⊗_k K).
i) It’s obvious that f⊗1 preserves addition.
For any x ∈ A and λ, μ ∈ K, (f⊗1)(μ·(x⊗λ)) = f(x)⊗μλ = μ·(f(x)⊗λ) = μ·(f⊗1)(x⊗λ).
ii) We’ve already seen that D⊗1 is K-linear. To see that it is also a derivation, note that, for x, y ∈ A and λ, μ ∈ K, (x⊗λ)·(y⊗μ) = (xy)⊗(λμ). Thus, for any x, y ∈ A and λ, μ ∈ K, we have (D⊗1)((x⊗λ)(y⊗μ)) = D(xy)⊗λμ = D(x)y⊗λμ + xD(y)⊗λμ = ((D⊗1)(x⊗λ))·(y⊗μ) + (x⊗λ)·((D⊗1)(y⊗μ)).
In the following discussion, let’s focus on the special case ℝ ⊂ ℂ, an extension of degree 2.
First, let’s describe the complexification A ⊗_ℝ ℂ more explicitly. Since ℂ = ℝ ⊕ iℝ, we can write, for any vector space V /ℝ, V ⊗_ℝ ℂ = (V ⊗_ℝ ℝ) ⊕ (V ⊗_ℝ iℝ) = V ⊕ iV as vector spaces /ℝ.
If A is an algebra /ℝ, let J : A ⊗_ℝ ℂ → A ⊗_ℝ ℂ be the ℝ-linear map a⊗z ↦ a⊗iz, so that J² = −id. Therefore we have a ring hom ℂ → End_ℝ(A ⊗_ℝ ℂ), x + iy ↦ x·id + y·J. Endowed with this structure map, A ⊗_ℝ ℂ becomes an algebra /ℂ as described above.
For a linear map f : V → W between vector spaces /ℝ, the ℂ-linear map f⊗1 : V ⊗_ℝ ℂ → W ⊗_ℝ ℂ is now given by the formula V ⊕ iV → W ⊕ iW, v + iw ↦ f(v) + if(w); hence it is denoted by f ⊕ f : V ⊗_ℝ ℂ → W ⊗_ℝ ℂ.
Let 𝒟 ⊂ ℝ², A = ∑C^i(𝒟), A² = ∑C^i(𝒟, ℝ²).
By the above lemma, ∂_j ⊕ ∂_j ∈ Der A² for j = 1, 2, where A² is regarded as an algebra /ℂ. Since Der A² is a vector space /ℂ, ∂z := ½(∂x ⊕ ∂x − i(∂y ⊕ ∂y)) and ∂z̄ := ½(∂x ⊕ ∂x + i(∂y ⊕ ∂y)) are also derivations on A²: ∂z, ∂z̄ ∈ Der A². These satisfy ∂x ⊕ ∂x = ∂z + ∂z̄ and ∂y ⊕ ∂y = i(∂z − ∂z̄), which justifies the notation “∂z, ∂z̄”.
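These are exactly the classical Wirtinger derivatives; here is a numeric sanity check (plain Python; the step h and the sample point z₀ are ad-hoc choices) that ∂z z = 1, ∂z z̄ = 0, and that ∂z̄ annihilates the holomorphic function z²:

```python
h = 1e-5

def dx(f, z):
    # central difference in the real direction
    return (f(z + h) - f(z - h)) / (2 * h)

def dy(f, z):
    # central difference in the imaginary direction
    return (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)

def d_z(f, z):
    # (1/2)(d/dx - i d/dy)
    return (dx(f, z) - 1j * dy(f, z)) / 2

def d_zbar(f, z):
    # (1/2)(d/dx + i d/dy)
    return (dx(f, z) + 1j * dy(f, z)) / 2

z0 = 0.7 + 0.3j
# d_z z = 1, d_zbar z = 0, d_z zbar = 0, d_zbar zbar = 1,
# and for the holomorphic f(z) = z^2: d_zbar f = 0, d_z f = 2z
```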
A ⊗_ℝ ℂ is equipped with another important endomorphism S ∈ End_ℝ(A ⊗_ℝ ℂ), given equivalently by S : A ⊗_ℝ ℂ → A ⊗_ℝ ℂ, a⊗z ↦ a⊗z̄, or S : A ⊕ iA → A ⊕ iA, a + ib ↦ a − ib. In general, an endomorphism S ∈ End_ℝ V of a vector space V /ℝ is called an involution if S² = id but S ≠ id.
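The two endomorphisms J and S can be modeled concretely (a plain-Python sketch, with pairs (a, b) standing for a + ib ∈ A ⊕ iA): J² = −id, S² = id, and S∘J = −J∘S, the last relation expressing that S is conjugate-linear.

```python
def J(v):
    # multiplication by i: i(a + ib) = -b + ia
    a, b = v
    return (-b, a)

def S(v):
    # conjugation: a + ib -> a - ib
    a, b = v
    return (a, -b)

def neg(v):
    a, b = v
    return (-a, -b)

v = (2.0, 5.0)
# J^2 = -id, S^2 = id, S J = -J S
```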
The following lemma shows the compatibility of the involution S on A⊗RC with that on C.
Let A, B be algebras /ℝ, and S_A, S_B the involutions on A ⊗_ℝ ℂ, B ⊗_ℝ ℂ, resp., given above. Then: i) for any ℝ-linear map f : A → B, we have S_B ∘ (f⊗1) = (f⊗1) ∘ S_A; ii) if ℂ-linear maps F₁, F₂, G₁, G₂ : A ⊗_ℝ ℂ → B ⊗_ℝ ℂ satisfy S_B ∘ F_j = G_j ∘ S_A (j = 1, 2), then S_B ∘ (F₁ + F₂) = (G₁ + G₂) ∘ S_A and S_B ∘ (λF₁) = λ̄·(G₁ ∘ S_A) for λ ∈ ℂ.
i) For all a ∈ A and z ∈ ℂ, we have (S_B ∘ (f⊗1))(a⊗z) = S_B(f(a)⊗z) = f(a)⊗z̄ and ((f⊗1) ∘ S_A)(a⊗z) = (f⊗1)(a⊗z̄) = f(a)⊗z̄; hence S_B ∘ (f⊗1) = (f⊗1) ∘ S_A.
ii) The first assertion is obvious, as S_B ∘ (F₁ + F₂) = S_B ∘ F₁ + S_B ∘ F₂ = G₁ ∘ S_A + G₂ ∘ S_A = (G₁ + G₂) ∘ S_A.
The second follows from the fact that, for all a ∈ A and z ∈ ℂ, we have S_B((λF₁)(a⊗z)) = S_B(F₁(a⊗λz)) = G₁(S_A(a⊗λz)) = G₁(a⊗λ̄z̄) = λ̄·(G₁ ∘ S_A)(a⊗z).
Let 𝒟 ⊂ ℝ², A = ∑C^i(𝒟), A² = ∑C^i(𝒟, ℝ²).
Since ∂x, ∂y ∈ Der A ⊂ End_ℝ A, by the above lemma, S_A ∘ ∂z̄ = ∂z ∘ S_A, which is expressed concretely as ¯(∂f/∂z̄) = ∂f̄/∂z for all f ∈ A², where f̄ denotes the map f̄(x, y) := ¯(f(x, y)), the pointwise complex conjugate of f.
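The identity can also be checked numerically (plain Python, central differences; the non-holomorphic sample f(x + iy) = x² + 3ixy is an ad-hoc choice): the conjugate of ∂f/∂z̄ agrees with ∂f̄/∂z.

```python
h = 1e-5

def dx(f, z): return (f(z + h) - f(z - h)) / (2 * h)
def dy(f, z): return (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
def d_z(f, z): return (dx(f, z) - 1j * dy(f, z)) / 2     # (1/2)(d/dx - i d/dy)
def d_zbar(f, z): return (dx(f, z) + 1j * dy(f, z)) / 2  # (1/2)(d/dx + i d/dy)

def f(z):
    # a deliberately non-holomorphic sample: x^2 + 3i x y
    x, y = z.real, z.imag
    return x * x + 3j * x * y

fbar = lambda z: f(z).conjugate()  # the pointwise conjugate of f

z0 = 0.7 + 0.3j
lhs = d_zbar(f, z0).conjugate()    # conjugate of df/dzbar
rhs = d_z(fbar, z0)                # d(fbar)/dz
```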
Lemma 1.1.1 of [KAS] is now obvious from the above considerations.
We have now defined ∂z, ∂z̄ by ∂z := ½(∂x ⊕ ∂x − i(∂y ⊕ ∂y)), ∂z̄ := ½(∂x ⊕ ∂x + i(∂y ⊕ ∂y)), and from this follow the relations ∂z z = 1, ∂z z̄ = 0, ∂z̄ z = 0, ∂z̄ z̄ = 1. Indeed, with z = x + iy, we compute ∂z z = ½((∂x ⊕ ∂x)(x + iy) − i(∂y ⊕ ∂y)(x + iy)) = ½(1 − i·i) = 1, and the other three relations follow by the same computation.
Conversely, if we start with ∂z z = 1, etc., then we can derive the formulas for ∂z, ∂z̄, or equivalently, ∂x ⊕ ∂x = ∂z + ∂z̄ and ∂y ⊕ ∂y = i(∂z − ∂z̄).
Let 𝒟 ⊂ ℝ², A = ∑C^i(𝒟), A² = ∑C^i(𝒟, ℝ²) as above, and let ∂z, ∂z̄ ∈ Der A² satisfy ∂z z = 1, ∂z z̄ = 0, ∂z̄ z = 0, ∂z̄ z̄ = 1.
For x = (z + z̄)/2 and y = (z − z̄)/2i, we have ∂z x = ½, ∂z y = 1/(2i), ∂z̄ x = ½, ∂z̄ y = −1/(2i).
By the composition laws, which we haven’t proved yet, we want ∂x z = 1, ∂x z̄ = 1, ∂y z = i, ∂y z̄ = −i, and, for all f ∈ A², ∂x f = (∂x z)(∂z f) + (∂x z̄)(∂z̄ f) = ∂z f + ∂z̄ f and ∂y f = (∂y z)(∂z f) + (∂y z̄)(∂z̄ f) = i ∂z f − i ∂z̄ f.
So, it’s reasonable to define ∂x := ∂z + ∂z̄ and ∂y := i(∂z − ∂z̄).
Then, do these operators agree with ∂x ⊕ ∂x and ∂y ⊕ ∂y, respectively?
For definiteness, we denote the new operators instead by ∂x′ := ∂z + ∂z̄ and ∂y′ := i(∂z − ∂z̄). What we must show is then: for all f ∈ A, ∂x′ f = ∂x f and ∂y′ f = ∂y f.
First note that ∂x′ x = 1, ∂x′ y = 0, ∂y′ x = 0, ∂y′ y = 1. We can see the linear independence of ∂x′, ∂y′ from these relations: for any λ, μ ∈ ℂ, if λ∂x′ + μ∂y′ = 0, then applying both sides to x gives λ = (λ∂x′ + μ∂y′)(x) = 0, and similarly we obtain μ = 0.
Let B := {f ∈ A : polynomial} = ℝ[x, y].
Since ∂x′, ∂y′ ∈ Der A², we have ∂x′ = ∂x and ∂y′ = ∂y on B (using the defining properties of derivations). So, if 𝒟 is bounded, then, by the Stone–Weierstrass theorem, B is dense in A; hence ∂x′ = ∂x and ∂y′ = ∂y on A.
Otherwise, since ℝ² is locally compact, there exists a family (ℬ_j)_{j∈I} of bounded sets, indexed by some set I, such that 𝒟 = ⋃_{j∈I} ℬ_j. Then ∂x′ = ∂x and ∂y′ = ∂y on each ∑C^i(ℬ_j), and hence on A, because ∑C^i(𝒟) ⊂ ⋃_j ∑C^i(ℬ_j).
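The density step above is the Stone–Weierstrass phenomenon; in one variable it can be illustrated with Bernstein polynomials (a sketch in plain Python; the degree 200 and the target function eˣ are ad-hoc choices, not from [KAS]):

```python
from math import comb, exp

def bernstein(f, n, x):
    # degree-n Bernstein polynomial of f on [0, 1]:
    # B_n f(x) = sum_k f(k/n) C(n, k) x^k (1 - x)^(n - k)
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = exp
# sample the approximation error on a grid of [0, 1]
err = max(abs(bernstein(f, 200, x / 100) - f(x / 100)) for x in range(101))
```

For smooth f the error of B_n f decays like 1/n, so already at degree 200 the uniform error on this grid is well below 0.01.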