When package ssa was created, Type was in package gc.
To avoid circular dependencies, we used an interface (ssa.Type)
to represent type information in SSA.

In the Go 1.9 cycle, gri extricated the Type type from package gc.
As a result, we can now use it in package ssa.
Now, instead of package types depending on package ssa,
it is the other way.
This is a more sensible dependency tree,
and helps compiler performance a bit.

Though this is a big CL, most of the changes are
mechanical and uninteresting.

Interesting bits:
* Add new singleton globals to package types for the special
  SSA types Memory, Void, Invalid, Flags, and Int128.

* Add two new Types, TSSA for the special types,
  and TTUPLE, for SSA tuple types.
  ssa.MakeTuple is now types.NewTuple.

* Move type comparison result constants CMPlt, CMPeq, and CMPgt
  to package types.

* We had picked the name "types" in our rules for the handy
  list of types provided by ssa.Config. That conflicted with
  the types package name, so change it to "typ".

* Update the type comparison routine to handle tuples and special
  types inline.

* Teach gc/fmt.go how to print special types.

* We can now eliminate ElemTypes in favor of just Elem,
  and probably also some other duplicated Type methods
  designed to return ssa.Type instead of *types.Type.

* The ssa tests were using their own dummy types,
  and they were not particularly careful about types in general.
  Of necessity, this CL switches them to use *types.Type;
  it does not make them more type-accurate.
  Unfortunately, using types.Type means initializing a bit
  of the types universe.
  This is prime for refactoring and improvement.
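The shape of the new special types and tuple types can be sketched in isolation. This is a simplified, self-contained stand-in mirroring the Etype/Extra fields used in the diff below, not the real cmd/compile/internal/types layout:

```go
package main

import "fmt"

// Etype enumerates type kinds. TSSA and TTUPLE are the two new kinds
// added for SSA-internal types (simplified; the real package has many more).
type Etype int

const (
	TSSA Etype = iota
	TTUPLE
)

// Type is a pared-down stand-in for *types.Type.
type Type struct {
	Etype Etype
	Extra interface{} // for TSSA: the name string; for TTUPLE: [2]*Type
}

// Singleton globals for the special SSA types, analogous to
// types.TypeMem, types.TypeVoid, etc.
var (
	TypeInvalid = &Type{Etype: TSSA, Extra: "invalid"}
	TypeMem     = &Type{Etype: TSSA, Extra: "mem"}
	TypeFlags   = &Type{Etype: TSSA, Extra: "flags"}
	TypeVoid    = &Type{Etype: TSSA, Extra: "void"}
	TypeInt128  = &Type{Etype: TSSA, Extra: "int128"}
)

// NewTuple builds an SSA tuple type, playing the role of the old
// ssa.MakeTuple.
func NewTuple(t0, t1 *Type) *Type {
	return &Type{Etype: TTUPLE, Extra: [2]*Type{t0, t1}}
}

// String mimics the special-type printing taught to gc/fmt.go.
func (t *Type) String() string {
	switch t.Etype {
	case TSSA:
		return t.Extra.(string)
	case TTUPLE:
		pair := t.Extra.([2]*Type)
		return pair[0].String() + "," + pair[1].String()
	}
	return "<T>"
}

func main() {
	fmt.Println(NewTuple(TypeFlags, TypeMem)) // flags,mem
}
```

A tuple like (flags,mem) is what ops such as AtomicLoad32 produce: a result value paired with the new memory state.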
This shrinks ssa.Value; it now fits in a smaller size class
on 64 bit systems. This doesn't have a giant impact,
though, since most Values are preallocated in a chunk.

name old alloc/op new alloc/op delta
Template 37.9MB ± 0% 37.7MB ± 0% -0.57% (p=0.000 n=10+8)
Unicode 28.9MB ± 0% 28.7MB ± 0% -0.52% (p=0.000 n=10+10)
GoTypes 110MB ± 0% 109MB ± 0% -0.88% (p=0.000 n=10+10)
Flate 24.7MB ± 0% 24.6MB ± 0% -0.66% (p=0.000 n=10+10)
GoParser 31.1MB ± 0% 30.9MB ± 0% -0.61% (p=0.000 n=10+9)
Reflect 73.9MB ± 0% 73.4MB ± 0% -0.62% (p=0.000 n=10+8)
Tar 25.8MB ± 0% 25.6MB ± 0% -0.77% (p=0.000 n=9+10)
XML 41.2MB ± 0% 40.9MB ± 0% -0.80% (p=0.000 n=10+10)
[Geo mean] 40.5MB 40.3MB -0.68%

name old allocs/op new allocs/op delta
Template 385k ± 0% 386k ± 0% ~ (p=0.356 n=10+9)
Unicode 343k ± 1% 344k ± 0% ~ (p=0.481 n=10+10)
GoTypes 1.16M ± 0% 1.16M ± 0% -0.16% (p=0.004 n=10+10)
Flate 238k ± 1% 238k ± 1% ~ (p=0.853 n=10+10)
GoParser 320k ± 0% 320k ± 0% ~ (p=0.720 n=10+9)
Reflect 957k ± 0% 957k ± 0% ~ (p=0.460 n=10+8)
Tar 252k ± 0% 252k ± 0% ~ (p=0.133 n=9+10)
XML 400k ± 0% 400k ± 0% ~ (p=0.796 n=10+10)
[Geo mean] 428k 428k -0.01%

Removing all the interface calls helps non-trivially with CPU, though.

name old time/op new time/op delta
Template 178ms ± 4% 173ms ± 3% -2.90% (p=0.000 n=94+96)
Unicode 85.0ms ± 4% 83.9ms ± 4% -1.23% (p=0.000 n=96+96)
GoTypes 543ms ± 3% 528ms ± 3% -2.73% (p=0.000 n=98+96)
Flate 116ms ± 3% 113ms ± 4% -2.34% (p=0.000 n=96+99)
GoParser 144ms ± 3% 140ms ± 4% -2.80% (p=0.000 n=99+97)
Reflect 344ms ± 3% 334ms ± 4% -3.02% (p=0.000 n=100+99)
Tar 106ms ± 5% 103ms ± 4% -3.30% (p=0.000 n=98+94)
XML 198ms ± 5% 192ms ± 4% -2.88% (p=0.000 n=92+95)
[Geo mean] 178ms 173ms -2.65%

name old user-time/op new user-time/op delta
Template 229ms ± 5% 224ms ± 5% -2.36% (p=0.000 n=95+99)
Unicode 107ms ± 6% 106ms ± 5% -1.13% (p=0.001 n=93+95)
GoTypes 696ms ± 4% 679ms ± 4% -2.45% (p=0.000 n=97+99)
Flate 137ms ± 4% 134ms ± 5% -2.66% (p=0.000 n=99+96)
GoParser 176ms ± 5% 172ms ± 8% -2.27% (p=0.000 n=98+100)
Reflect 430ms ± 6% 411ms ± 5% -4.46% (p=0.000 n=100+92)
Tar 128ms ±13% 123ms ±13% -4.21% (p=0.000 n=100+100)
XML 239ms ± 6% 233ms ± 6% -2.50% (p=0.000 n=95+97)
[Geo mean] 220ms 213ms -2.76%
Change-Id: I15c7d6268347f8358e75066dfdbd77db24e8d0c1
Reviewed-on: https://go-review.googlesource.com/42145
Run-TryBot: Josh Bleecher Snyder <josharian@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@golang.org>
"*cmd/compile/internal/types.Type %L": "",
"*cmd/compile/internal/types.Type %S": "",
"*cmd/compile/internal/types.Type %p": "",
+ "*cmd/compile/internal/types.Type %s": "",
"*cmd/compile/internal/types.Type %v": "",
"*cmd/internal/obj.Addr %v": "",
"*cmd/internal/obj.LSym %v": "",
"cmd/compile/internal/ssa.Location %v": "",
"cmd/compile/internal/ssa.Op %s": "",
"cmd/compile/internal/ssa.Op %v": "",
- "cmd/compile/internal/ssa.Type %s": "",
- "cmd/compile/internal/ssa.Type %v": "",
"cmd/compile/internal/ssa.ValAndOff %s": "",
"cmd/compile/internal/ssa.rbrank %d": "",
"cmd/compile/internal/ssa.regMask %d": "",
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/x86"
)
}
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type) obj.As {
+func loadByType(t *types.Type) obj.As {
// Avoid partial register write
if !t.IsFloat() && t.Size() <= 2 {
if t.Size() == 1 {
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type) obj.As {
+func storeByType(t *types.Type) obj.As {
width := t.Size()
if t.IsFloat() {
switch width {
}
// moveByType returns the reg->reg move instruction of the given type.
-func moveByType(t ssa.Type) obj.As {
+func moveByType(t *types.Type) obj.As {
if t.IsFloat() {
// Moving the whole sse2 register is faster
// than moving just the correct low portion of it.
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/arm"
)
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type) obj.As {
+func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type) obj.As {
+func storeByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/arm64"
)
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type) obj.As {
+func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type) obj.As {
+func storeByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
if t == nil {
return "<T>"
}
+ if t.Etype == types.TSSA {
+ return t.Extra.(string)
+ }
+ if t.Etype == types.TTUPLE {
+ return t.FieldType(0).String() + "," + t.FieldType(1).String()
+ }
if depth > 100 {
return "<...>"
import (
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/src"
"container/heap"
"fmt"
// Generate a numbering for these variables.
s.varnum = map[*Node]int32{}
var vars []*Node
- var vartypes []ssa.Type
+ var vartypes []*types.Type
for _, b := range s.f.Blocks {
for _, v := range b.Values {
if v.Op != ssa.OpFwdRef {
s.queued = newSparseSet(s.f.NumBlocks())
s.hasPhi = newSparseSet(s.f.NumBlocks())
s.hasDef = newSparseSet(s.f.NumBlocks())
- s.placeholder = s.s.entryNewValue0(ssa.OpUnknown, ssa.TypeInvalid)
+ s.placeholder = s.s.entryNewValue0(ssa.OpUnknown, types.TypeInvalid)
// Generate phi ops for each variable.
for n := range vartypes {
}
}
-func (s *phiState) insertVarPhis(n int, var_ *Node, defs []*ssa.Block, typ ssa.Type) {
+func (s *phiState) insertVarPhis(n int, var_ *Node, defs []*ssa.Block, typ *types.Type) {
priq := &s.priq
q := s.q
queued := s.queued
}
// lookupVarOutgoing finds the variable's value at the end of block b.
-func (s *simplePhiState) lookupVarOutgoing(b *ssa.Block, t ssa.Type, var_ *Node, line src.XPos) *ssa.Value {
+func (s *simplePhiState) lookupVarOutgoing(b *ssa.Block, t *types.Type, var_ *Node, line src.XPos) *ssa.Value {
for {
if v := s.defvars[b.ID][var_]; v != nil {
return v
} else {
aux = &ssa.ArgSymbol{Node: v}
}
- b.NewValue0IA(src.NoXPos, ssa.OpClobber, ssa.TypeVoid, offset, aux)
+ b.NewValue0IA(src.NoXPos, ssa.OpClobber, types.TypeVoid, offset, aux)
}
func (lv *Liveness) avarinitanyall(b *ssa.Block, any, all bvec) {
s.labels = map[string]*ssaLabel{}
s.labeledNodes = map[*Node]*ssaLabel{}
s.fwdVars = map[*Node]*ssa.Value{}
- s.startmem = s.entryNewValue0(ssa.OpInitMem, ssa.TypeMem)
+ s.startmem = s.entryNewValue0(ssa.OpInitMem, types.TypeMem)
s.sp = s.entryNewValue0(ssa.OpSP, types.Types[TUINTPTR]) // TODO: use generic pointer type (unsafe.Pointer?) instead
s.sb = s.entryNewValue0(ssa.OpSB, types.Types[TUINTPTR])
}
// newValue0 adds a new value with no arguments to the current block.
-func (s *state) newValue0(op ssa.Op, t ssa.Type) *ssa.Value {
+func (s *state) newValue0(op ssa.Op, t *types.Type) *ssa.Value {
return s.curBlock.NewValue0(s.peekPos(), op, t)
}
// newValue0A adds a new value with no arguments and an aux value to the current block.
-func (s *state) newValue0A(op ssa.Op, t ssa.Type, aux interface{}) *ssa.Value {
+func (s *state) newValue0A(op ssa.Op, t *types.Type, aux interface{}) *ssa.Value {
return s.curBlock.NewValue0A(s.peekPos(), op, t, aux)
}
// newValue0I adds a new value with no arguments and an auxint value to the current block.
-func (s *state) newValue0I(op ssa.Op, t ssa.Type, auxint int64) *ssa.Value {
+func (s *state) newValue0I(op ssa.Op, t *types.Type, auxint int64) *ssa.Value {
return s.curBlock.NewValue0I(s.peekPos(), op, t, auxint)
}
// newValue1 adds a new value with one argument to the current block.
-func (s *state) newValue1(op ssa.Op, t ssa.Type, arg *ssa.Value) *ssa.Value {
+func (s *state) newValue1(op ssa.Op, t *types.Type, arg *ssa.Value) *ssa.Value {
return s.curBlock.NewValue1(s.peekPos(), op, t, arg)
}
// newValue1A adds a new value with one argument and an aux value to the current block.
-func (s *state) newValue1A(op ssa.Op, t ssa.Type, aux interface{}, arg *ssa.Value) *ssa.Value {
+func (s *state) newValue1A(op ssa.Op, t *types.Type, aux interface{}, arg *ssa.Value) *ssa.Value {
return s.curBlock.NewValue1A(s.peekPos(), op, t, aux, arg)
}
// newValue1I adds a new value with one argument and an auxint value to the current block.
-func (s *state) newValue1I(op ssa.Op, t ssa.Type, aux int64, arg *ssa.Value) *ssa.Value {
+func (s *state) newValue1I(op ssa.Op, t *types.Type, aux int64, arg *ssa.Value) *ssa.Value {
return s.curBlock.NewValue1I(s.peekPos(), op, t, aux, arg)
}
// newValue2 adds a new value with two arguments to the current block.
-func (s *state) newValue2(op ssa.Op, t ssa.Type, arg0, arg1 *ssa.Value) *ssa.Value {
+func (s *state) newValue2(op ssa.Op, t *types.Type, arg0, arg1 *ssa.Value) *ssa.Value {
return s.curBlock.NewValue2(s.peekPos(), op, t, arg0, arg1)
}
// newValue2I adds a new value with two arguments and an auxint value to the current block.
-func (s *state) newValue2I(op ssa.Op, t ssa.Type, aux int64, arg0, arg1 *ssa.Value) *ssa.Value {
+func (s *state) newValue2I(op ssa.Op, t *types.Type, aux int64, arg0, arg1 *ssa.Value) *ssa.Value {
return s.curBlock.NewValue2I(s.peekPos(), op, t, aux, arg0, arg1)
}
// newValue3 adds a new value with three arguments to the current block.
-func (s *state) newValue3(op ssa.Op, t ssa.Type, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
+func (s *state) newValue3(op ssa.Op, t *types.Type, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
return s.curBlock.NewValue3(s.peekPos(), op, t, arg0, arg1, arg2)
}
// newValue3I adds a new value with three arguments and an auxint value to the current block.
-func (s *state) newValue3I(op ssa.Op, t ssa.Type, aux int64, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
+func (s *state) newValue3I(op ssa.Op, t *types.Type, aux int64, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
return s.curBlock.NewValue3I(s.peekPos(), op, t, aux, arg0, arg1, arg2)
}
// newValue3A adds a new value with three arguments and an aux value to the current block.
-func (s *state) newValue3A(op ssa.Op, t ssa.Type, aux interface{}, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
+func (s *state) newValue3A(op ssa.Op, t *types.Type, aux interface{}, arg0, arg1, arg2 *ssa.Value) *ssa.Value {
return s.curBlock.NewValue3A(s.peekPos(), op, t, aux, arg0, arg1, arg2)
}
// newValue4 adds a new value with four arguments to the current block.
-func (s *state) newValue4(op ssa.Op, t ssa.Type, arg0, arg1, arg2, arg3 *ssa.Value) *ssa.Value {
+func (s *state) newValue4(op ssa.Op, t *types.Type, arg0, arg1, arg2, arg3 *ssa.Value) *ssa.Value {
return s.curBlock.NewValue4(s.peekPos(), op, t, arg0, arg1, arg2, arg3)
}
// entryNewValue0 adds a new value with no arguments to the entry block.
-func (s *state) entryNewValue0(op ssa.Op, t ssa.Type) *ssa.Value {
+func (s *state) entryNewValue0(op ssa.Op, t *types.Type) *ssa.Value {
return s.f.Entry.NewValue0(s.peekPos(), op, t)
}
// entryNewValue0A adds a new value with no arguments and an aux value to the entry block.
-func (s *state) entryNewValue0A(op ssa.Op, t ssa.Type, aux interface{}) *ssa.Value {
+func (s *state) entryNewValue0A(op ssa.Op, t *types.Type, aux interface{}) *ssa.Value {
return s.f.Entry.NewValue0A(s.peekPos(), op, t, aux)
}
// entryNewValue0I adds a new value with no arguments and an auxint value to the entry block.
-func (s *state) entryNewValue0I(op ssa.Op, t ssa.Type, auxint int64) *ssa.Value {
+func (s *state) entryNewValue0I(op ssa.Op, t *types.Type, auxint int64) *ssa.Value {
return s.f.Entry.NewValue0I(s.peekPos(), op, t, auxint)
}
// entryNewValue1 adds a new value with one argument to the entry block.
-func (s *state) entryNewValue1(op ssa.Op, t ssa.Type, arg *ssa.Value) *ssa.Value {
+func (s *state) entryNewValue1(op ssa.Op, t *types.Type, arg *ssa.Value) *ssa.Value {
return s.f.Entry.NewValue1(s.peekPos(), op, t, arg)
}
// entryNewValue1I adds a new value with one argument and an auxint value to the entry block.
-func (s *state) entryNewValue1I(op ssa.Op, t ssa.Type, auxint int64, arg *ssa.Value) *ssa.Value {
+func (s *state) entryNewValue1I(op ssa.Op, t *types.Type, auxint int64, arg *ssa.Value) *ssa.Value {
return s.f.Entry.NewValue1I(s.peekPos(), op, t, auxint, arg)
}
// entryNewValue1A adds a new value with one argument and an aux value to the entry block.
-func (s *state) entryNewValue1A(op ssa.Op, t ssa.Type, aux interface{}, arg *ssa.Value) *ssa.Value {
+func (s *state) entryNewValue1A(op ssa.Op, t *types.Type, aux interface{}, arg *ssa.Value) *ssa.Value {
return s.f.Entry.NewValue1A(s.peekPos(), op, t, aux, arg)
}
// entryNewValue2 adds a new value with two arguments to the entry block.
-func (s *state) entryNewValue2(op ssa.Op, t ssa.Type, arg0, arg1 *ssa.Value) *ssa.Value {
+func (s *state) entryNewValue2(op ssa.Op, t *types.Type, arg0, arg1 *ssa.Value) *ssa.Value {
return s.f.Entry.NewValue2(s.peekPos(), op, t, arg0, arg1)
}
// const* routines add a new const value to the entry block.
-func (s *state) constSlice(t ssa.Type) *ssa.Value { return s.f.ConstSlice(s.peekPos(), t) }
-func (s *state) constInterface(t ssa.Type) *ssa.Value { return s.f.ConstInterface(s.peekPos(), t) }
-func (s *state) constNil(t ssa.Type) *ssa.Value { return s.f.ConstNil(s.peekPos(), t) }
-func (s *state) constEmptyString(t ssa.Type) *ssa.Value { return s.f.ConstEmptyString(s.peekPos(), t) }
+func (s *state) constSlice(t *types.Type) *ssa.Value {
+ return s.f.ConstSlice(s.peekPos(), t)
+}
+func (s *state) constInterface(t *types.Type) *ssa.Value {
+ return s.f.ConstInterface(s.peekPos(), t)
+}
+func (s *state) constNil(t *types.Type) *ssa.Value { return s.f.ConstNil(s.peekPos(), t) }
+func (s *state) constEmptyString(t *types.Type) *ssa.Value {
+ return s.f.ConstEmptyString(s.peekPos(), t)
+}
func (s *state) constBool(c bool) *ssa.Value {
return s.f.ConstBool(s.peekPos(), types.Types[TBOOL], c)
}
-func (s *state) constInt8(t ssa.Type, c int8) *ssa.Value {
+func (s *state) constInt8(t *types.Type, c int8) *ssa.Value {
return s.f.ConstInt8(s.peekPos(), t, c)
}
-func (s *state) constInt16(t ssa.Type, c int16) *ssa.Value {
+func (s *state) constInt16(t *types.Type, c int16) *ssa.Value {
return s.f.ConstInt16(s.peekPos(), t, c)
}
-func (s *state) constInt32(t ssa.Type, c int32) *ssa.Value {
+func (s *state) constInt32(t *types.Type, c int32) *ssa.Value {
return s.f.ConstInt32(s.peekPos(), t, c)
}
-func (s *state) constInt64(t ssa.Type, c int64) *ssa.Value {
+func (s *state) constInt64(t *types.Type, c int64) *ssa.Value {
return s.f.ConstInt64(s.peekPos(), t, c)
}
-func (s *state) constFloat32(t ssa.Type, c float64) *ssa.Value {
+func (s *state) constFloat32(t *types.Type, c float64) *ssa.Value {
return s.f.ConstFloat32(s.peekPos(), t, c)
}
-func (s *state) constFloat64(t ssa.Type, c float64) *ssa.Value {
+func (s *state) constFloat64(t *types.Type, c float64) *ssa.Value {
return s.f.ConstFloat64(s.peekPos(), t, c)
}
-func (s *state) constInt(t ssa.Type, c int64) *ssa.Value {
+func (s *state) constInt(t *types.Type, c int64) *ssa.Value {
if s.config.PtrSize == 8 {
return s.constInt64(t, c)
}
}
return s.constInt32(t, int32(c))
}
-func (s *state) constOffPtrSP(t ssa.Type, c int64) *ssa.Value {
+func (s *state) constOffPtrSP(t *types.Type, c int64) *ssa.Value {
return s.f.ConstOffPtrSP(s.peekPos(), t, c, s.sp)
}
// varkill in the store chain is enough to keep it correctly ordered
// with respect to call ops.
if !s.canSSA(n.Left) {
- s.vars[&memVar] = s.newValue1A(ssa.OpVarKill, ssa.TypeMem, n.Left, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarKill, types.TypeMem, n.Left, s.mem())
}
case OVARLIVE:
if !n.Left.Addrtaken() {
s.Fatalf("VARLIVE variable %v must have Addrtaken set", n.Left)
}
- s.vars[&memVar] = s.newValue1A(ssa.OpVarLive, ssa.TypeMem, n.Left, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarLive, types.TypeMem, n.Left, s.mem())
case OCHECKNIL:
p := s.expr(n.Left)
for _, n := range s.returns {
addr := s.decladdrs[n]
val := s.variable(n, n.Type)
- s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, n, s.mem())
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, n.Type, addr, val, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, types.TypeMem, n, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, n.Type, addr, val, s.mem())
// TODO: if val is ever spilled, we'd like to use the
// PPARAMOUT slot for spilling it. That won't happen
// currently.
if inplace {
if sn.Op == ONAME {
// Tell liveness we're about to build a new slice
- s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, sn, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, types.TypeMem, sn, s.mem())
}
capaddr := s.newValue1I(ssa.OpOffPtr, s.f.Config.Types.IntPtr, int64(array_cap), addr)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TINT], capaddr, r[2], s.mem())
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, pt, addr, r[0], s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TINT], capaddr, r[2], s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, pt, addr, r[0], s.mem())
// load the value we just stored to avoid having to spill it
s.vars[&ptrVar] = s.newValue2(ssa.OpLoad, pt, addr, s.mem())
s.vars[&lenVar] = r[1] // avoid a spill in the fast path
l = s.variable(&lenVar, types.Types[TINT]) // generates phi for len
nl = s.newValue2(s.ssaOp(OADD, types.Types[TINT]), types.Types[TINT], l, s.constInt(types.Types[TINT], nargs))
lenaddr := s.newValue1I(ssa.OpOffPtr, s.f.Config.Types.IntPtr, int64(array_nel), addr)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TINT], lenaddr, nl, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TINT], lenaddr, nl, s.mem())
}
// Evaluate args
if arg.store {
s.storeType(et, addr, arg.v, 0)
} else {
- store := s.newValue3I(ssa.OpMove, ssa.TypeMem, et.Size(), addr, arg.v, s.mem())
+ store := s.newValue3I(ssa.OpMove, types.TypeMem, et.Size(), addr, arg.v, s.mem())
store.Aux = et
s.vars[&memVar] = store
}
// Left is not ssa-able. Compute its address.
addr := s.addr(left, false)
if left.Op == ONAME && skip == 0 {
- s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, left, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, types.TypeMem, left, s.mem())
}
if isReflectHeaderDataField(left) {
// Package unsafe's documentation says storing pointers into
// Treat as a mem->mem move.
var store *ssa.Value
if right == nil {
- store = s.newValue2I(ssa.OpZero, ssa.TypeMem, t.Size(), addr, s.mem())
+ store = s.newValue2I(ssa.OpZero, types.TypeMem, t.Size(), addr, s.mem())
} else {
- store = s.newValue3I(ssa.OpMove, ssa.TypeMem, t.Size(), addr, right, s.mem())
+ store = s.newValue3I(ssa.OpMove, types.TypeMem, t.Size(), addr, right, s.mem())
}
store.Aux = t
s.vars[&memVar] = store
n := t.NumFields()
v := s.entryNewValue0(ssa.StructMakeOp(t.NumFields()), t)
for i := 0; i < n; i++ {
- v.AddArg(s.zeroVal(t.FieldType(i).(*types.Type)))
+ v.AddArg(s.zeroVal(t.FieldType(i)))
}
return v
case t.IsArray():
add("runtime", "KeepAlive",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
data := s.newValue1(ssa.OpIData, s.f.Config.Types.BytePtr, args[0])
- s.vars[&memVar] = s.newValue2(ssa.OpKeepAlive, ssa.TypeMem, data, s.mem())
+ s.vars[&memVar] = s.newValue2(ssa.OpKeepAlive, types.TypeMem, data, s.mem())
return nil
},
all...)
/******** runtime/internal/atomic ********/
addF("runtime/internal/atomic", "Load",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue2(ssa.OpAtomicLoad32, ssa.MakeTuple(types.Types[TUINT32], ssa.TypeMem), args[0], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue2(ssa.OpAtomicLoad32, types.NewTuple(types.Types[TUINT32], types.TypeMem), args[0], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TUINT32], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Load64",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue2(ssa.OpAtomicLoad64, ssa.MakeTuple(types.Types[TUINT64], ssa.TypeMem), args[0], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue2(ssa.OpAtomicLoad64, types.NewTuple(types.Types[TUINT64], types.TypeMem), args[0], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TUINT64], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.PPC64)
addF("runtime/internal/atomic", "Loadp",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue2(ssa.OpAtomicLoadPtr, ssa.MakeTuple(s.f.Config.Types.BytePtr, ssa.TypeMem), args[0], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue2(ssa.OpAtomicLoadPtr, types.NewTuple(s.f.Config.Types.BytePtr, types.TypeMem), args[0], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, s.f.Config.Types.BytePtr, v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Store",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- s.vars[&memVar] = s.newValue3(ssa.OpAtomicStore32, ssa.TypeMem, args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue3(ssa.OpAtomicStore32, types.TypeMem, args[0], args[1], s.mem())
return nil
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Store64",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- s.vars[&memVar] = s.newValue3(ssa.OpAtomicStore64, ssa.TypeMem, args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue3(ssa.OpAtomicStore64, types.TypeMem, args[0], args[1], s.mem())
return nil
},
sys.AMD64, sys.ARM64, sys.S390X, sys.PPC64)
addF("runtime/internal/atomic", "StorepNoWB",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- s.vars[&memVar] = s.newValue3(ssa.OpAtomicStorePtrNoWB, ssa.TypeMem, args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue3(ssa.OpAtomicStorePtrNoWB, types.TypeMem, args[0], args[1], s.mem())
return nil
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS)
addF("runtime/internal/atomic", "Xchg",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue3(ssa.OpAtomicExchange32, ssa.MakeTuple(types.Types[TUINT32], ssa.TypeMem), args[0], args[1], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue3(ssa.OpAtomicExchange32, types.NewTuple(types.Types[TUINT32], types.TypeMem), args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TUINT32], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Xchg64",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue3(ssa.OpAtomicExchange64, ssa.MakeTuple(types.Types[TUINT64], ssa.TypeMem), args[0], args[1], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue3(ssa.OpAtomicExchange64, types.NewTuple(types.Types[TUINT64], types.TypeMem), args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TUINT64], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.PPC64)
addF("runtime/internal/atomic", "Xadd",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue3(ssa.OpAtomicAdd32, ssa.MakeTuple(types.Types[TUINT32], ssa.TypeMem), args[0], args[1], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue3(ssa.OpAtomicAdd32, types.NewTuple(types.Types[TUINT32], types.TypeMem), args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TUINT32], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Xadd64",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue3(ssa.OpAtomicAdd64, ssa.MakeTuple(types.Types[TUINT64], ssa.TypeMem), args[0], args[1], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue3(ssa.OpAtomicAdd64, types.NewTuple(types.Types[TUINT64], types.TypeMem), args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TUINT64], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.PPC64)
addF("runtime/internal/atomic", "Cas",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue4(ssa.OpAtomicCompareAndSwap32, ssa.MakeTuple(types.Types[TBOOL], ssa.TypeMem), args[0], args[1], args[2], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue4(ssa.OpAtomicCompareAndSwap32, types.NewTuple(types.Types[TBOOL], types.TypeMem), args[0], args[1], args[2], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TBOOL], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Cas64",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- v := s.newValue4(ssa.OpAtomicCompareAndSwap64, ssa.MakeTuple(types.Types[TBOOL], ssa.TypeMem), args[0], args[1], args[2], s.mem())
- s.vars[&memVar] = s.newValue1(ssa.OpSelect1, ssa.TypeMem, v)
+ v := s.newValue4(ssa.OpAtomicCompareAndSwap64, types.NewTuple(types.Types[TBOOL], types.TypeMem), args[0], args[1], args[2], s.mem())
+ s.vars[&memVar] = s.newValue1(ssa.OpSelect1, types.TypeMem, v)
return s.newValue1(ssa.OpSelect0, types.Types[TBOOL], v)
},
sys.AMD64, sys.ARM64, sys.S390X, sys.PPC64)
addF("runtime/internal/atomic", "And8",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- s.vars[&memVar] = s.newValue3(ssa.OpAtomicAnd8, ssa.TypeMem, args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue3(ssa.OpAtomicAnd8, types.TypeMem, args[0], args[1], s.mem())
return nil
},
sys.AMD64, sys.ARM64, sys.MIPS, sys.PPC64)
addF("runtime/internal/atomic", "Or8",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- s.vars[&memVar] = s.newValue3(ssa.OpAtomicOr8, ssa.TypeMem, args[0], args[1], s.mem())
+ s.vars[&memVar] = s.newValue3(ssa.OpAtomicOr8, types.TypeMem, args[0], args[1], s.mem())
return nil
},
sys.AMD64, sys.ARM64, sys.MIPS, sys.PPC64)
/******** math/big ********/
add("math/big", "mulWW",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- return s.newValue2(ssa.OpMul64uhilo, ssa.MakeTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1])
+ return s.newValue2(ssa.OpMul64uhilo, types.NewTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1])
},
sys.ArchAMD64)
add("math/big", "divWW",
func(s *state, n *Node, args []*ssa.Value) *ssa.Value {
- return s.newValue3(ssa.OpDiv128u, ssa.MakeTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1], args[2])
+ return s.newValue3(ssa.OpDiv128u, types.NewTuple(types.Types[TUINT64], types.Types[TUINT64]), args[0], args[1], args[2])
},
sys.ArchAMD64)
}
argStart += int64(2 * Widthptr)
}
addr := s.constOffPtrSP(s.f.Config.Types.UintptrPtr, argStart)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TUINTPTR], addr, rcvr, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TUINTPTR], addr, rcvr, s.mem())
}
// Defer/go args
argStart := Ctxt.FixedFrameSize()
argsize := s.constInt32(types.Types[TUINT32], int32(stksize))
addr := s.constOffPtrSP(s.f.Config.Types.UInt32Ptr, argStart)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TUINT32], addr, argsize, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TUINT32], addr, argsize, s.mem())
addr = s.constOffPtrSP(s.f.Config.Types.UintptrPtr, argStart+int64(Widthptr))
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TUINTPTR], addr, closure, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TUINTPTR], addr, closure, s.mem())
stksize += 2 * int64(Widthptr)
}
var call *ssa.Value
switch {
case k == callDefer:
- call = s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, Deferproc, s.mem())
+ call = s.newValue1A(ssa.OpStaticCall, types.TypeMem, Deferproc, s.mem())
case k == callGo:
- call = s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, Newproc, s.mem())
+ call = s.newValue1A(ssa.OpStaticCall, types.TypeMem, Newproc, s.mem())
case closure != nil:
codeptr = s.newValue2(ssa.OpLoad, types.Types[TUINTPTR], closure, s.mem())
- call = s.newValue3(ssa.OpClosureCall, ssa.TypeMem, codeptr, closure, s.mem())
+ call = s.newValue3(ssa.OpClosureCall, types.TypeMem, codeptr, closure, s.mem())
case codeptr != nil:
- call = s.newValue2(ssa.OpInterCall, ssa.TypeMem, codeptr, s.mem())
+ call = s.newValue2(ssa.OpInterCall, types.TypeMem, codeptr, s.mem())
case sym != nil:
- call = s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, sym.Linksym(), s.mem())
+ call = s.newValue1A(ssa.OpStaticCall, types.TypeMem, sym.Linksym(), s.mem())
default:
Fatalf("bad call type %v %v", n.Op, n)
}
if disable_checknil != 0 || s.curfn.Func.NilCheckDisabled() {
return
}
- s.newValue2(ssa.OpNilCheck, ssa.TypeVoid, ptr, s.mem())
+ s.newValue2(ssa.OpNilCheck, types.TypeVoid, ptr, s.mem())
}
// boundsCheck generates bounds checking code. Checks if 0 <= idx < len, branches to exit if not.
off = Rnd(off, t.Alignment())
ptr := s.constOffPtrSP(t.PtrTo(), off)
size := t.Size()
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, t, ptr, arg, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, t, ptr, arg, s.mem())
off += size
}
off = Rnd(off, int64(Widthreg))
// Issue call
- call := s.newValue1A(ssa.OpStaticCall, ssa.TypeMem, fn, s.mem())
+ call := s.newValue1A(ssa.OpStaticCall, types.TypeMem, fn, s.mem())
s.vars[&memVar] = call
if !returns {
func (s *state) storeType(t *types.Type, left, right *ssa.Value, skip skipMask) {
if skip == 0 && (!types.Haspointers(t) || ssa.IsStackAddr(left)) {
// Known to not have write barrier. Store the whole type.
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, t, left, right, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, t, left, right, s.mem())
return
}
func (s *state) storeTypeScalars(t *types.Type, left, right *ssa.Value, skip skipMask) {
switch {
case t.IsBoolean() || t.IsInteger() || t.IsFloat() || t.IsComplex():
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, t, left, right, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, t, left, right, s.mem())
case t.IsPtrShaped():
// no scalar fields.
case t.IsString():
}
len := s.newValue1(ssa.OpStringLen, types.Types[TINT], right)
lenAddr := s.newValue1I(ssa.OpOffPtr, s.f.Config.Types.IntPtr, s.config.PtrSize, left)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TINT], lenAddr, len, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TINT], lenAddr, len, s.mem())
case t.IsSlice():
if skip&skipLen == 0 {
len := s.newValue1(ssa.OpSliceLen, types.Types[TINT], right)
lenAddr := s.newValue1I(ssa.OpOffPtr, s.f.Config.Types.IntPtr, s.config.PtrSize, left)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TINT], lenAddr, len, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TINT], lenAddr, len, s.mem())
}
if skip&skipCap == 0 {
cap := s.newValue1(ssa.OpSliceCap, types.Types[TINT], right)
capAddr := s.newValue1I(ssa.OpOffPtr, s.f.Config.Types.IntPtr, 2*s.config.PtrSize, left)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TINT], capAddr, cap, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TINT], capAddr, cap, s.mem())
}
case t.IsInterface():
// itab field doesn't need a write barrier (even though it is a pointer).
itab := s.newValue1(ssa.OpITab, s.f.Config.Types.BytePtr, right)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, types.Types[TUINTPTR], left, itab, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, types.Types[TUINTPTR], left, itab, s.mem())
case t.IsStruct():
n := t.NumFields()
for i := 0; i < n; i++ {
ft := t.FieldType(i)
addr := s.newValue1I(ssa.OpOffPtr, ft.PtrTo(), t.FieldOff(i), left)
val := s.newValue1I(ssa.OpStructSelect, ft, int64(i), right)
- s.storeTypeScalars(ft.(*types.Type), addr, val, 0)
+ s.storeTypeScalars(ft, addr, val, 0)
}
case t.IsArray() && t.NumElem() == 0:
// nothing
func (s *state) storeTypePtrs(t *types.Type, left, right *ssa.Value) {
switch {
case t.IsPtrShaped():
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, t, left, right, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, t, left, right, s.mem())
case t.IsString():
ptr := s.newValue1(ssa.OpStringPtr, s.f.Config.Types.BytePtr, right)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, s.f.Config.Types.BytePtr, left, ptr, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, s.f.Config.Types.BytePtr, left, ptr, s.mem())
case t.IsSlice():
ptr := s.newValue1(ssa.OpSlicePtr, s.f.Config.Types.BytePtr, right)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, s.f.Config.Types.BytePtr, left, ptr, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, s.f.Config.Types.BytePtr, left, ptr, s.mem())
case t.IsInterface():
// itab field is treated as a scalar.
idata := s.newValue1(ssa.OpIData, s.f.Config.Types.BytePtr, right)
idataAddr := s.newValue1I(ssa.OpOffPtr, s.f.Config.Types.BytePtrPtr, s.config.PtrSize, left)
- s.vars[&memVar] = s.newValue3A(ssa.OpStore, ssa.TypeMem, s.f.Config.Types.BytePtr, idataAddr, idata, s.mem())
+ s.vars[&memVar] = s.newValue3A(ssa.OpStore, types.TypeMem, s.f.Config.Types.BytePtr, idataAddr, idata, s.mem())
case t.IsStruct():
n := t.NumFields()
for i := 0; i < n; i++ {
ft := t.FieldType(i)
- if !types.Haspointers(ft.(*types.Type)) {
+ if !types.Haspointers(ft) {
continue
}
addr := s.newValue1I(ssa.OpOffPtr, ft.PtrTo(), t.FieldOff(i), left)
val := s.newValue1I(ssa.OpStructSelect, ft, int64(i), right)
- s.storeTypePtrs(ft.(*types.Type), addr, val)
+ s.storeTypePtrs(ft, addr, val)
}
case t.IsArray() && t.NumElem() == 0:
// nothing
type u642fcvtTab struct {
geq, cvt2F, and, rsh, or, add ssa.Op
- one func(*state, ssa.Type, int64) *ssa.Value
+ one func(*state, *types.Type, int64) *ssa.Value
}
var u64_f64 u642fcvtTab = u642fcvtTab{
type f2uCvtTab struct {
ltf, cvt2U, subf, or ssa.Op
- floatValue func(*state, ssa.Type, float64) *ssa.Value
- intValue func(*state, ssa.Type, int64) *ssa.Value
+ floatValue func(*state, *types.Type, float64) *ssa.Value
+ intValue func(*state, *types.Type, int64) *ssa.Value
cutoff uint64
}
subf: ssa.OpSub32F,
or: ssa.OpOr32,
floatValue: (*state).constFloat32,
- intValue: func(s *state, t ssa.Type, v int64) *ssa.Value { return s.constInt32(t, int32(v)) },
+ intValue: func(s *state, t *types.Type, v int64) *ssa.Value { return s.constInt32(t, int32(v)) },
cutoff: 2147483648,
}
subf: ssa.OpSub64F,
or: ssa.OpOr32,
floatValue: (*state).constFloat64,
- intValue: func(s *state, t ssa.Type, v int64) *ssa.Value { return s.constInt32(t, int32(v)) },
+ intValue: func(s *state, t *types.Type, v int64) *ssa.Value { return s.constInt32(t, int32(v)) },
cutoff: 2147483648,
}
// TODO: get rid of some of these temporaries.
tmp = tempAt(n.Pos, s.curfn, n.Type)
addr = s.addr(tmp, false)
- s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, ssa.TypeMem, tmp, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarDef, types.TypeMem, tmp, s.mem())
}
cond := s.newValue2(ssa.OpEqPtr, types.Types[TBOOL], itab, targetITab)
}
} else {
p := s.newValue1(ssa.OpIData, types.NewPtr(n.Type), iface)
- store := s.newValue3I(ssa.OpMove, ssa.TypeMem, n.Type.Size(), addr, p, s.mem())
+ store := s.newValue3I(ssa.OpMove, types.TypeMem, n.Type.Size(), addr, p, s.mem())
store.Aux = n.Type
s.vars[&memVar] = store
}
if tmp == nil {
s.vars[valVar] = s.zeroVal(n.Type)
} else {
- store := s.newValue2I(ssa.OpZero, ssa.TypeMem, n.Type.Size(), addr, s.mem())
+ store := s.newValue2I(ssa.OpZero, types.TypeMem, n.Type.Size(), addr, s.mem())
store.Aux = n.Type
s.vars[&memVar] = store
}
delete(s.vars, valVar)
} else {
res = s.newValue2(ssa.OpLoad, n.Type, addr, s.mem())
- s.vars[&memVar] = s.newValue1A(ssa.OpVarKill, ssa.TypeMem, tmp, s.mem())
+ s.vars[&memVar] = s.newValue1A(ssa.OpVarKill, types.TypeMem, tmp, s.mem())
}
resok = s.variable(&okVar, types.Types[TBOOL])
delete(s.vars, &okVar)
}
// variable returns the value of a variable at the current location.
-func (s *state) variable(name *Node, t ssa.Type) *ssa.Value {
+func (s *state) variable(name *Node, t *types.Type) *ssa.Value {
v := s.vars[name]
if v != nil {
return v
}
func (s *state) mem() *ssa.Value {
- return s.variable(&memVar, ssa.TypeMem)
+ return s.variable(&memVar, types.TypeMem)
}
func (s *state) addNamedValue(n *Node, v *ssa.Value) {
return aux
}
-func (e *ssafn) Auto(pos src.XPos, t ssa.Type) ssa.GCNode {
- n := tempAt(pos, e.curfn, t.(*types.Type)) // Note: adds new auto to e.curfn.Func.Dcl list
+func (e *ssafn) Auto(pos src.XPos, t *types.Type) ssa.GCNode {
+ n := tempAt(pos, e.curfn, t) // Note: adds new auto to e.curfn.Func.Dcl list
return n
}
func (e *ssafn) SplitSlice(name ssa.LocalSlot) (ssa.LocalSlot, ssa.LocalSlot, ssa.LocalSlot) {
n := name.N.(*Node)
- ptrType := types.NewPtr(name.Type.ElemType().(*types.Type))
+ ptrType := types.NewPtr(name.Type.ElemType())
lenType := types.Types[TINT]
if n.Class() == PAUTO && !n.Addrtaken() {
// Split this slice up into three separate variables.
// namedAuto returns a new AUTO variable with the given name and type.
// These are exposed to the debugger.
-func (e *ssafn) namedAuto(name string, typ ssa.Type, pos src.XPos) ssa.GCNode {
- t := typ.(*types.Type)
+func (e *ssafn) namedAuto(name string, typ *types.Type, pos src.XPos) ssa.GCNode {
+ t := typ
s := &types.Sym{Name: name, Pkg: localpkg}
n := new(Node)
return n
}
-func (e *ssafn) CanSSA(t ssa.Type) bool {
- return canSSAType(t.(*types.Type))
+func (e *ssafn) CanSSA(t *types.Type) bool {
+ return canSSAType(t)
}
func (e *ssafn) Line(pos src.XPos) string {
return nil
}
-func (n *Node) Typ() ssa.Type {
+func (n *Node) Typ() *types.Type {
return n.Type
}
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/mips"
)
}
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type, r int16) obj.As {
+func loadByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type, r int16) obj.As {
+func storeByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/mips"
)
}
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type, r int16) obj.As {
+func loadByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type, r int16) obj.As {
+func storeByType(t *types.Type, r int16) obj.As {
if isFPreg(r) {
if t.Size() == 4 { // float32 or int32
return mips.AMOVF
import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/ppc64"
"math"
}
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type) obj.As {
+func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type) obj.As {
+func storeByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/s390x"
)
}
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type) obj.As {
+func loadByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4:
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type) obj.As {
+func storeByType(t *types.Type) obj.As {
width := t.Size()
if t.IsFloat() {
switch width {
}
// moveByType returns the reg->reg move instruction of the given type.
-func moveByType(t ssa.Type) obj.As {
+func moveByType(t *types.Type) obj.As {
if t.IsFloat() {
return s390x.AFMOVD
} else {
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/objabi"
"cmd/internal/src"
)
type Types struct {
- Bool Type
- Int8 Type
- Int16 Type
- Int32 Type
- Int64 Type
- UInt8 Type
- UInt16 Type
- UInt32 Type
- UInt64 Type
- Int Type
- Float32 Type
- Float64 Type
- Uintptr Type
- String Type
- BytePtr Type // TODO: use unsafe.Pointer instead?
- Int32Ptr Type
- UInt32Ptr Type
- IntPtr Type
- UintptrPtr Type
- Float32Ptr Type
- Float64Ptr Type
- BytePtrPtr Type
+ Bool *types.Type
+ Int8 *types.Type
+ Int16 *types.Type
+ Int32 *types.Type
+ Int64 *types.Type
+ UInt8 *types.Type
+ UInt16 *types.Type
+ UInt32 *types.Type
+ UInt64 *types.Type
+ Int *types.Type
+ Float32 *types.Type
+ Float64 *types.Type
+ Uintptr *types.Type
+ String *types.Type
+ BytePtr *types.Type // TODO: use unsafe.Pointer instead?
+ Int32Ptr *types.Type
+ UInt32Ptr *types.Type
+ IntPtr *types.Type
+ UintptrPtr *types.Type
+ Float32Ptr *types.Type
+ Float64Ptr *types.Type
+ BytePtrPtr *types.Type
}
type Logger interface {
}
type Frontend interface {
- CanSSA(t Type) bool
+ CanSSA(t *types.Type) bool
Logger
// Auto returns a Node for an auto variable of the given type.
// The SSA compiler uses this function to allocate space for spills.
- Auto(src.XPos, Type) GCNode
+ Auto(src.XPos, *types.Type) GCNode
// Given the name for a compound type, returns the name we should use
// for the parts of that compound type.
// interface used to hold *gc.Node. We'd use *gc.Node directly but
// that would lead to an import cycle.
type GCNode interface {
- Typ() Type
+ Typ() *types.Type
String() string
}
package ssa
import (
+ "cmd/compile/internal/types"
"fmt"
"testing"
)
c := testConfig(b)
values := make([]interface{}, 0, n+2)
- values = append(values, Valu("mem", OpInitMem, TypeMem, 0, nil))
+ values = append(values, Valu("mem", OpInitMem, types.TypeMem, 0, nil))
last := "mem"
for i := 0; i < n; i++ {
name := fmt.Sprintf("copy%d", i)
- values = append(values, Valu(name, OpCopy, TypeMem, 0, nil, last))
+ values = append(values, Valu(name, OpCopy, types.TypeMem, 0, nil, last))
last = name
}
values = append(values, Exit(last))
package ssa
import (
+ "cmd/compile/internal/types"
"fmt"
"sort"
)
j := 1
for ; j < len(a); j++ {
w := a[j]
- if cmpVal(v, w, auxIDs) != CMPeq {
+ if cmpVal(v, w, auxIDs) != types.CMPeq {
break
}
}
return partition
}
-func lt2Cmp(isLt bool) Cmp {
+func lt2Cmp(isLt bool) types.Cmp {
if isLt {
- return CMPlt
+ return types.CMPlt
}
- return CMPgt
+ return types.CMPgt
}
type auxmap map[interface{}]int32
-func cmpVal(v, w *Value, auxIDs auxmap) Cmp {
+func cmpVal(v, w *Value, auxIDs auxmap) types.Cmp {
// Try to order these comparison by cost (cheaper first)
if v.Op != w.Op {
return lt2Cmp(v.Op < w.Op)
return lt2Cmp(v.ID < w.ID)
}
- if tc := v.Type.Compare(w.Type); tc != CMPeq {
+ if tc := v.Type.Compare(w.Type); tc != types.CMPeq {
return tc
}
if v.Aux != w.Aux {
if v.Aux == nil {
- return CMPlt
+ return types.CMPlt
}
if w.Aux == nil {
- return CMPgt
+ return types.CMPgt
}
return lt2Cmp(auxIDs[v.Aux] < auxIDs[w.Aux])
}
- return CMPeq
+ return types.CMPeq
}
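For reference, the three-way comparison pattern that cmpVal relies on can be sketched in isolation. This is an illustrative standalone version (with its own Cmp type, since cmd/compile/internal/types is an internal package and cannot be imported outside the toolchain); the constant values mirror types.CMPlt, types.CMPeq, and types.CMPgt:

```go
package main

import "fmt"

// Cmp mirrors the three-valued comparison result used by the
// compiler's type comparison routines.
type Cmp int

const (
	CMPlt = Cmp(-1)
	CMPeq = Cmp(0)
	CMPgt = Cmp(1)
)

// lt2Cmp converts a boolean "less than" result into a Cmp,
// matching the helper in cse.go: a non-equal comparison is
// resolved to either CMPlt or CMPgt.
func lt2Cmp(isLt bool) Cmp {
	if isLt {
		return CMPlt
	}
	return CMPgt
}

func main() {
	fmt.Println(lt2Cmp(true) == CMPlt)
	fmt.Println(lt2Cmp(false) == CMPgt)
}
```

cmpVal chains these checks cheapest-first (Op, then ID, then Type.Compare, then Aux), returning on the first non-CMPeq result.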
// Sort values to make the initial partition.
func (sv sortvalues) Less(i, j int) bool {
v := sv.a[i]
w := sv.a[j]
- if cmp := cmpVal(v, w, sv.auxIDs); cmp != CMPeq {
- return cmp == CMPlt
+ if cmp := cmpVal(v, w, sv.auxIDs); cmp != types.CMPeq {
+ return cmp == types.CMPlt
}
// Sort by value ID last to keep the sort result deterministic.
package ssa
-import "testing"
+import (
+ "cmd/compile/internal/types"
+ "testing"
+)
type tstAux struct {
s string
// them in an order that triggers the bug
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sp", OpSP, TypeBytePtr, 0, nil),
- Valu("r7", OpAdd64, TypeInt64, 0, nil, "arg3", "arg1"),
- Valu("r1", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
- Valu("arg1", OpArg, TypeInt64, 0, arg1Aux),
- Valu("arg2", OpArg, TypeInt64, 0, arg2Aux),
- Valu("arg3", OpArg, TypeInt64, 0, arg3Aux),
- Valu("r9", OpAdd64, TypeInt64, 0, nil, "r7", "r8"),
- Valu("r4", OpAdd64, TypeInt64, 0, nil, "r1", "r2"),
- Valu("r8", OpAdd64, TypeInt64, 0, nil, "arg3", "arg2"),
- Valu("r2", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
- Valu("raddr", OpAddr, TypeInt64Ptr, 0, nil, "sp"),
- Valu("raddrdef", OpVarDef, TypeMem, 0, nil, "start"),
- Valu("r6", OpAdd64, TypeInt64, 0, nil, "r4", "r5"),
- Valu("r3", OpAdd64, TypeInt64, 0, nil, "arg1", "arg2"),
- Valu("r5", OpAdd64, TypeInt64, 0, nil, "r2", "r3"),
- Valu("r10", OpAdd64, TypeInt64, 0, nil, "r6", "r9"),
- Valu("rstore", OpStore, TypeMem, 0, TypeInt64, "raddr", "r10", "raddrdef"),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sp", OpSP, c.config.Types.BytePtr, 0, nil),
+ Valu("r7", OpAdd64, c.config.Types.Int64, 0, nil, "arg3", "arg1"),
+ Valu("r1", OpAdd64, c.config.Types.Int64, 0, nil, "arg1", "arg2"),
+ Valu("arg1", OpArg, c.config.Types.Int64, 0, arg1Aux),
+ Valu("arg2", OpArg, c.config.Types.Int64, 0, arg2Aux),
+ Valu("arg3", OpArg, c.config.Types.Int64, 0, arg3Aux),
+ Valu("r9", OpAdd64, c.config.Types.Int64, 0, nil, "r7", "r8"),
+ Valu("r4", OpAdd64, c.config.Types.Int64, 0, nil, "r1", "r2"),
+ Valu("r8", OpAdd64, c.config.Types.Int64, 0, nil, "arg3", "arg2"),
+ Valu("r2", OpAdd64, c.config.Types.Int64, 0, nil, "arg1", "arg2"),
+ Valu("raddr", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sp"),
+ Valu("raddrdef", OpVarDef, types.TypeMem, 0, nil, "start"),
+ Valu("r6", OpAdd64, c.config.Types.Int64, 0, nil, "r4", "r5"),
+ Valu("r3", OpAdd64, c.config.Types.Int64, 0, nil, "arg1", "arg2"),
+ Valu("r5", OpAdd64, c.config.Types.Int64, 0, nil, "r2", "r3"),
+ Valu("r10", OpAdd64, c.config.Types.Int64, 0, nil, "r6", "r9"),
+ Valu("rstore", OpStore, types.TypeMem, 0, c.config.Types.Int64, "raddr", "r10", "raddrdef"),
Goto("exit")),
Bloc("exit",
Exit("rstore")))
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sp", OpSP, TypeBytePtr, 0, nil),
- Valu("sb1", OpSB, TypeBytePtr, 0, nil),
- Valu("sb2", OpSB, TypeBytePtr, 0, nil),
- Valu("addr1", OpAddr, TypeInt64Ptr, 0, nil, "sb1"),
- Valu("addr2", OpAddr, TypeInt64Ptr, 0, nil, "sb2"),
- Valu("a1ld", OpLoad, TypeInt64, 0, nil, "addr1", "start"),
- Valu("a2ld", OpLoad, TypeInt64, 0, nil, "addr2", "start"),
- Valu("c1", OpConst64, TypeInt64, 1, nil),
- Valu("r1", OpAdd64, TypeInt64, 0, nil, "a1ld", "c1"),
- Valu("c2", OpConst64, TypeInt64, 1, nil),
- Valu("r2", OpAdd64, TypeInt64, 0, nil, "a2ld", "c2"),
- Valu("r3", OpAdd64, TypeInt64, 0, nil, "r1", "r2"),
- Valu("raddr", OpAddr, TypeInt64Ptr, 0, nil, "sp"),
- Valu("raddrdef", OpVarDef, TypeMem, 0, nil, "start"),
- Valu("rstore", OpStore, TypeMem, 0, TypeInt64, "raddr", "r3", "raddrdef"),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sp", OpSP, c.config.Types.BytePtr, 0, nil),
+ Valu("sb1", OpSB, c.config.Types.BytePtr, 0, nil),
+ Valu("sb2", OpSB, c.config.Types.BytePtr, 0, nil),
+ Valu("addr1", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sb1"),
+ Valu("addr2", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sb2"),
+ Valu("a1ld", OpLoad, c.config.Types.Int64, 0, nil, "addr1", "start"),
+ Valu("a2ld", OpLoad, c.config.Types.Int64, 0, nil, "addr2", "start"),
+ Valu("c1", OpConst64, c.config.Types.Int64, 1, nil),
+ Valu("r1", OpAdd64, c.config.Types.Int64, 0, nil, "a1ld", "c1"),
+ Valu("c2", OpConst64, c.config.Types.Int64, 1, nil),
+ Valu("r2", OpAdd64, c.config.Types.Int64, 0, nil, "a2ld", "c2"),
+ Valu("r3", OpAdd64, c.config.Types.Int64, 0, nil, "r1", "r2"),
+ Valu("raddr", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "sp"),
+ Valu("raddrdef", OpVarDef, types.TypeMem, 0, nil, "start"),
+ Valu("rstore", OpStore, types.TypeMem, 0, c.config.Types.Int64, "raddr", "r3", "raddrdef"),
Goto("exit")),
Bloc("exit",
Exit("rstore")))
package ssa
import (
+ "cmd/compile/internal/types"
"fmt"
"strconv"
"testing"
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem")),
// dead loop
Bloc("deadblock",
// dead value in dead block
- Valu("deadval", OpConstBool, TypeBool, 1, nil),
+ Valu("deadval", OpConstBool, c.config.Types.Bool, 1, nil),
If("deadval", "deadblock", "exit")))
CheckFunc(fun.f)
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("deadval", OpConst64, TypeInt64, 37, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("deadval", OpConst64, c.config.Types.Int64, 37, nil),
Goto("exit")),
Bloc("exit",
Exit("mem")))
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("cond", OpConstBool, TypeBool, 0, nil),
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("cond", OpConstBool, c.config.Types.Bool, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
If("cond", "then", "else")),
Bloc("then",
Goto("exit")),
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("cond", OpConstBool, TypeBool, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("cond", OpConstBool, c.config.Types.Bool, 0, nil),
If("cond", "b2", "b4")),
Bloc("b2",
If("cond", "b3", "b4")),
blocks := make([]bloc, 0, n+2)
blocks = append(blocks,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")))
blocks = append(blocks, Bloc("exit", Exit("mem")))
for i := 0; i < n; i++ {
package ssa
-import "cmd/internal/src"
+import (
+ "cmd/compile/internal/types"
+ "cmd/internal/src"
+)
// dse does dead-store elimination on the Function.
// Dead stores are those which are unconditionally followed by
if v.Op == OpStore || v.Op == OpZero {
var sz int64
if v.Op == OpStore {
- sz = v.Aux.(Type).Size()
+ sz = v.Aux.(*types.Type).Size()
} else { // OpZero
sz = v.AuxInt
}
package ssa
-import "testing"
+import (
+ "cmd/compile/internal/types"
+ "testing"
+)
func TestDeadStore(t *testing.T) {
c := testConfig(t)
- elemType := &TypeImpl{Size_: 1, Name: "testtype"}
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr", Elem_: elemType} // dummy for testing
+ ptrType := c.config.Types.BytePtr
+ t.Logf("PTRTYPE %v", ptrType)
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr1", OpAddr, ptrType, 0, nil, "sb"),
Valu("addr2", OpAddr, ptrType, 0, nil, "sb"),
Valu("addr3", OpAddr, ptrType, 0, nil, "sb"),
- Valu("zero1", OpZero, TypeMem, 1, TypeBool, "addr3", "start"),
- Valu("store1", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "zero1"),
- Valu("store2", OpStore, TypeMem, 0, TypeBool, "addr2", "v", "store1"),
- Valu("store3", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "store2"),
- Valu("store4", OpStore, TypeMem, 0, TypeBool, "addr3", "v", "store3"),
+ Valu("zero1", OpZero, types.TypeMem, 1, c.config.Types.Bool, "addr3", "start"),
+ Valu("store1", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "zero1"),
+ Valu("store2", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr2", "v", "store1"),
+ Valu("store3", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "store2"),
+ Valu("store4", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr3", "v", "store3"),
Goto("exit")),
Bloc("exit",
Exit("store3")))
func TestDeadStorePhi(t *testing.T) {
// make sure we don't get into an infinite loop with phi values.
c := testConfig(t)
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr", OpAddr, ptrType, 0, nil, "sb"),
Goto("loop")),
Bloc("loop",
- Valu("phi", OpPhi, TypeMem, 0, nil, "start", "store"),
- Valu("store", OpStore, TypeMem, 0, TypeBool, "addr", "v", "phi"),
+ Valu("phi", OpPhi, types.TypeMem, 0, nil, "start", "store"),
+ Valu("store", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr", "v", "phi"),
If("v", "loop", "exit")),
Bloc("exit",
Exit("store")))
// types of the address fields are identical (where identicalness is
// decided by the CSE pass).
c := testConfig(t)
- t1 := &TypeImpl{Size_: 8, Ptr: true, Name: "t1"}
- t2 := &TypeImpl{Size_: 4, Ptr: true, Name: "t2"}
+ t1 := c.config.Types.UInt64.PtrTo()
+ t2 := c.config.Types.UInt32.PtrTo()
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr1", OpAddr, t1, 0, nil, "sb"),
Valu("addr2", OpAddr, t2, 0, nil, "sb"),
- Valu("store1", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "start"),
- Valu("store2", OpStore, TypeMem, 0, TypeBool, "addr2", "v", "store1"),
+ Valu("store1", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "start"),
+ Valu("store2", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr2", "v", "store1"),
Goto("exit")),
Bloc("exit",
Exit("store2")))
// covers the case of two different types, but unsafe pointer casting
// can get to a point where the size is changed but type unchanged.
c := testConfig(t)
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ ptrType := c.config.Types.UInt64.PtrTo()
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("v", OpConstBool, TypeBool, 1, nil),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("v", OpConstBool, c.config.Types.Bool, 1, nil),
Valu("addr1", OpAddr, ptrType, 0, nil, "sb"),
- Valu("store1", OpStore, TypeMem, 0, TypeInt64, "addr1", "v", "start"), // store 8 bytes
- Valu("store2", OpStore, TypeMem, 0, TypeBool, "addr1", "v", "store1"), // store 1 byte
+ Valu("store1", OpStore, types.TypeMem, 0, c.config.Types.Int64, "addr1", "v", "start"), // store 8 bytes
+ Valu("store2", OpStore, types.TypeMem, 0, c.config.Types.Bool, "addr1", "v", "store1"), // store 1 byte
Goto("exit")),
Bloc("exit",
Exit("store2")))
package ssa
+import "cmd/compile/internal/types"
+
// decompose converts phi ops on compound builtin types into phi
// ops on simple types.
// (The remaining compound ops are decomposed with rewrite rules.)
t := name.Type
switch {
case t.IsInteger() && t.Size() > f.Config.RegSize:
- var elemType Type
+ var elemType *types.Type
if t.IsSigned() {
elemType = f.Config.Types.Int32
} else {
}
delete(f.NamedValues, name)
case t.IsComplex():
- var elemType Type
+ var elemType *types.Type
if t.Size() == 16 {
elemType = f.Config.Types.Float64
} else {
}
func decomposeInt64Phi(v *Value) {
- types := &v.Block.Func.Config.Types
- var partType Type
+ cfgtypes := &v.Block.Func.Config.Types
+ var partType *types.Type
if v.Type.IsSigned() {
- partType = types.Int32
+ partType = cfgtypes.Int32
} else {
- partType = types.UInt32
+ partType = cfgtypes.UInt32
}
hi := v.Block.NewValue0(v.Pos, OpPhi, partType)
- lo := v.Block.NewValue0(v.Pos, OpPhi, types.UInt32)
+ lo := v.Block.NewValue0(v.Pos, OpPhi, cfgtypes.UInt32)
for _, a := range v.Args {
hi.AddArg(a.Block.NewValue1(v.Pos, OpInt64Hi, partType, a))
- lo.AddArg(a.Block.NewValue1(v.Pos, OpInt64Lo, types.UInt32, a))
+ lo.AddArg(a.Block.NewValue1(v.Pos, OpInt64Lo, cfgtypes.UInt32, a))
}
v.reset(OpInt64Make)
v.AddArg(hi)
}
func decomposeComplexPhi(v *Value) {
- types := &v.Block.Func.Config.Types
- var partType Type
+ cfgtypes := &v.Block.Func.Config.Types
+ var partType *types.Type
switch z := v.Type.Size(); z {
case 8:
- partType = types.Float32
+ partType = cfgtypes.Float32
case 16:
- partType = types.Float64
+ partType = cfgtypes.Float64
default:
v.Fatalf("decomposeComplexPhi: bad complex size %d", z)
}
package ssa
-import "testing"
+import (
+ "cmd/compile/internal/types"
+ "testing"
+)
func BenchmarkDominatorsLinear(b *testing.B) { benchmarkDominators(b, 10000, genLinear) }
func BenchmarkDominatorsFwdBack(b *testing.B) { benchmarkDominators(b, 10000, genFwdBack) }
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto(blockn(0)),
),
)
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
switch i % 3 {
case 0:
blocs = append(blocs, Bloc(blockn(i),
- Valu("a", OpConstBool, TypeBool, 1, nil),
+ Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(i+1))))
case 1:
blocs = append(blocs, Bloc(blockn(i),
- Valu("a", OpConstBool, TypeBool, 1, nil),
+ Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", blockn(i+1), blockn(0))))
case 2:
blocs = append(blocs, Bloc(blockn(i),
- Valu("a", OpConstBool, TypeBool, 1, nil),
+ Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", blockn(i+1), blockn(size))))
}
}
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto(blockn(0)),
),
)
for i := 0; i < size; i++ {
blocs = append(blocs, Bloc(blockn(i),
- Valu("a", OpConstBool, TypeBool, 1, nil),
+ Valu("a", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", blockn(i+1), "exit")))
}
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Exit("mem")))
doms := map[string]string{}
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("a")),
Bloc("a",
Goto("b")),
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", "a", "c")),
Bloc("a",
If("p", "b", "c")),
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 0, nil),
If("p", "b3", "b5")),
Bloc("b2", Exit("mem")),
Bloc("b3", Goto("b2")),
Bloc("entry",
Goto("first")),
Bloc("first",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto("a")),
Bloc("a",
If("p", "b", "first")),
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", "a", "c")),
Bloc("a",
If("p", "b", "c")),
// note lack of an exit block
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto("a")),
Bloc("a",
Goto("b")),
cfg := testConfig(t)
fun := cfg.Fun("1",
Bloc("1",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
Goto("4")),
Bloc("2",
Goto("11")),
c := testConfig(t)
fun := c.Fun("b1",
Bloc("b1",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("p", OpConstBool, TypeBool, 1, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("p", OpConstBool, types.Types[types.TBOOL], 1, nil),
If("p", "b3", "b2")),
Bloc("b3",
If("p", "b5", "b6")),
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/s390x"
"cmd/internal/obj/x86"
"cmd/internal/src"
+ "fmt"
"testing"
)
}
type DummyAuto struct {
- t Type
+ t *types.Type
s string
}
-func (d *DummyAuto) Typ() Type {
+func (d *DummyAuto) Typ() *types.Type {
return d.t
}
func (DummyFrontend) StringData(s string) interface{} {
return nil
}
-func (DummyFrontend) Auto(pos src.XPos, t Type) GCNode {
+func (DummyFrontend) Auto(pos src.XPos, t *types.Type) GCNode {
return &DummyAuto{t: t, s: "aDummyAuto"}
}
func (d DummyFrontend) SplitString(s LocalSlot) (LocalSlot, LocalSlot) {
func (d DummyFrontend) Debug_checknil() bool { return false }
func (d DummyFrontend) Debug_wb() bool { return false }
-var dummyTypes = Types{
- Bool: TypeBool,
- Int8: TypeInt8,
- Int16: TypeInt16,
- Int32: TypeInt32,
- Int64: TypeInt64,
- UInt8: TypeUInt8,
- UInt16: TypeUInt16,
- UInt32: TypeUInt32,
- UInt64: TypeUInt64,
- Float32: TypeFloat32,
- Float64: TypeFloat64,
- Int: TypeInt64,
- Uintptr: TypeUInt64,
- String: nil,
- BytePtr: TypeBytePtr,
- Int32Ptr: TypeInt32.PtrTo(),
- UInt32Ptr: TypeUInt32.PtrTo(),
- IntPtr: TypeInt64.PtrTo(),
- UintptrPtr: TypeUInt64.PtrTo(),
- Float32Ptr: TypeFloat32.PtrTo(),
- Float64Ptr: TypeFloat64.PtrTo(),
- BytePtrPtr: TypeBytePtr.PtrTo(),
+var dummyTypes Types
+
+func init() {
+ // Initialize just enough of the universe and the types package to make our tests function.
+ // TODO(josharian): move universe initialization to the types package,
+ // so this test setup can share it.
+
+ types.Tconv = func(t *types.Type, flag, mode, depth int) string {
+ return t.Etype.String()
+ }
+ types.Sconv = func(s *types.Sym, flag, mode int) string {
+ return "sym"
+ }
+ types.FormatSym = func(sym *types.Sym, s fmt.State, verb rune, mode int) {
+ fmt.Fprintf(s, "sym")
+ }
+ types.FormatType = func(t *types.Type, s fmt.State, verb rune, mode int) {
+ fmt.Fprintf(s, "%v", t.Etype)
+ }
+ types.Dowidth = func(t *types.Type) {}
+
+ types.Tptr = types.TPTR64
+ for _, typ := range [...]struct {
+ width int64
+ et types.EType
+ }{
+ {1, types.TINT8},
+ {1, types.TUINT8},
+ {1, types.TBOOL},
+ {2, types.TINT16},
+ {2, types.TUINT16},
+ {4, types.TINT32},
+ {4, types.TUINT32},
+ {4, types.TFLOAT32},
+ {8, types.TFLOAT64},
+ {8, types.TUINT64},
+ {8, types.TINT64},
+ {8, types.TINT},
+ {8, types.TUINTPTR},
+ } {
+ t := types.New(typ.et)
+ t.Width = typ.width
+ t.Align = uint8(typ.width)
+ types.Types[typ.et] = t
+ }
+
+ dummyTypes = Types{
+ Bool: types.Types[types.TBOOL],
+ Int8: types.Types[types.TINT8],
+ Int16: types.Types[types.TINT16],
+ Int32: types.Types[types.TINT32],
+ Int64: types.Types[types.TINT64],
+ UInt8: types.Types[types.TUINT8],
+ UInt16: types.Types[types.TUINT16],
+ UInt32: types.Types[types.TUINT32],
+ UInt64: types.Types[types.TUINT64],
+ Float32: types.Types[types.TFLOAT32],
+ Float64: types.Types[types.TFLOAT64],
+ Int: types.Types[types.TINT],
+ Uintptr: types.Types[types.TUINTPTR],
+ String: types.Types[types.TSTRING],
+ BytePtr: types.NewPtr(types.Types[types.TUINT8]),
+ Int32Ptr: types.NewPtr(types.Types[types.TINT32]),
+ UInt32Ptr: types.NewPtr(types.Types[types.TUINT32]),
+ IntPtr: types.NewPtr(types.Types[types.TINT]),
+ UintptrPtr: types.NewPtr(types.Types[types.TUINTPTR]),
+ Float32Ptr: types.NewPtr(types.Types[types.TFLOAT32]),
+ Float64Ptr: types.NewPtr(types.Types[types.TFLOAT64]),
+ BytePtrPtr: types.NewPtr(types.NewPtr(types.Types[types.TUINT8])),
+ }
}
func (d DummyFrontend) DerefItab(sym *obj.LSym, off int64) *obj.LSym { return nil }
-func (d DummyFrontend) CanSSA(t Type) bool {
+func (d DummyFrontend) CanSSA(t *types.Type) bool {
// There are no un-SSAable types in dummy land.
return true
}
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/src"
"crypto/sha1"
"fmt"
// This package compiles each Func independently.
// Funcs are single-use; a new Func must be created for every compiled function.
type Func struct {
- Config *Config // architecture information
- Cache *Cache // re-usable cache
- fe Frontend // frontend state associated with this Func, callbacks into compiler frontend
- pass *pass // current pass information (name, options, etc.)
- Name string // e.g. bytes·Compare
- Type Type // type signature of the function.
- Blocks []*Block // unordered set of all basic blocks (note: not indexable by ID)
- Entry *Block // the entry basic block
- bid idAlloc // block ID allocator
- vid idAlloc // value ID allocator
+ Config *Config // architecture information
+ Cache *Cache // re-usable cache
+ fe Frontend // frontend state associated with this Func, callbacks into compiler frontend
+ pass *pass // current pass information (name, options, etc.)
+ Name string // e.g. bytes·Compare
+ Type *types.Type // type signature of the function.
+ Blocks []*Block // unordered set of all basic blocks (note: not indexable by ID)
+ Entry *Block // the entry basic block
+ bid idAlloc // block ID allocator
+ vid idAlloc // value ID allocator
// Given an environment variable used for debug hash match,
// what file (if any) receives the yes/no logging?
}
// newValue allocates a new Value with the given fields and places it at the end of b.Values.
-func (f *Func) newValue(op Op, t Type, b *Block, pos src.XPos) *Value {
+func (f *Func) newValue(op Op, t *types.Type, b *Block, pos src.XPos) *Value {
var v *Value
if f.freeValues != nil {
v = f.freeValues
// The returned value is not placed in any block. Once the caller
// decides on a block b, it must set b.Block and append
// the returned value to b.Values.
-func (f *Func) newValueNoBlock(op Op, t Type, pos src.XPos) *Value {
+func (f *Func) newValueNoBlock(op Op, t *types.Type, pos src.XPos) *Value {
var v *Value
if f.freeValues != nil {
v = f.freeValues
}
// NewValue0 returns a new value in the block with no arguments and zero aux values.
-func (b *Block) NewValue0(pos src.XPos, op Op, t Type) *Value {
+func (b *Block) NewValue0(pos src.XPos, op Op, t *types.Type) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Args = v.argstorage[:0]
}
// NewValue0I returns a new value in the block with no arguments and an auxint value.
-func (b *Block) NewValue0I(pos src.XPos, op Op, t Type, auxint int64) *Value {
+func (b *Block) NewValue0I(pos src.XPos, op Op, t *types.Type, auxint int64) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Args = v.argstorage[:0]
}
// NewValue0A returns a new value in the block with no arguments and an aux value.
-func (b *Block) NewValue0A(pos src.XPos, op Op, t Type, aux interface{}) *Value {
+func (b *Block) NewValue0A(pos src.XPos, op Op, t *types.Type, aux interface{}) *Value {
if _, ok := aux.(int64); ok {
// Disallow int64 aux values. They should be in the auxint field instead.
// Maybe we want to allow this at some point, but for now we disallow it
}
// NewValue0IA returns a new value in the block with no arguments and both an auxint and aux values.
-func (b *Block) NewValue0IA(pos src.XPos, op Op, t Type, auxint int64, aux interface{}) *Value {
+func (b *Block) NewValue0IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux interface{}) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Aux = aux
}
// NewValue1 returns a new value in the block with one argument and zero aux values.
-func (b *Block) NewValue1(pos src.XPos, op Op, t Type, arg *Value) *Value {
+func (b *Block) NewValue1(pos src.XPos, op Op, t *types.Type, arg *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Args = v.argstorage[:1]
}
// NewValue1I returns a new value in the block with one argument and an auxint value.
-func (b *Block) NewValue1I(pos src.XPos, op Op, t Type, auxint int64, arg *Value) *Value {
+func (b *Block) NewValue1I(pos src.XPos, op Op, t *types.Type, auxint int64, arg *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Args = v.argstorage[:1]
}
// NewValue1A returns a new value in the block with one argument and an aux value.
-func (b *Block) NewValue1A(pos src.XPos, op Op, t Type, aux interface{}, arg *Value) *Value {
+func (b *Block) NewValue1A(pos src.XPos, op Op, t *types.Type, aux interface{}, arg *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Aux = aux
}
// NewValue1IA returns a new value in the block with one argument and both an auxint and aux values.
-func (b *Block) NewValue1IA(pos src.XPos, op Op, t Type, auxint int64, aux interface{}, arg *Value) *Value {
+func (b *Block) NewValue1IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux interface{}, arg *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Aux = aux
}
// NewValue2 returns a new value in the block with two arguments and zero aux values.
-func (b *Block) NewValue2(pos src.XPos, op Op, t Type, arg0, arg1 *Value) *Value {
+func (b *Block) NewValue2(pos src.XPos, op Op, t *types.Type, arg0, arg1 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Args = v.argstorage[:2]
}
// NewValue2I returns a new value in the block with two arguments and an auxint value.
-func (b *Block) NewValue2I(pos src.XPos, op Op, t Type, auxint int64, arg0, arg1 *Value) *Value {
+func (b *Block) NewValue2I(pos src.XPos, op Op, t *types.Type, auxint int64, arg0, arg1 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Args = v.argstorage[:2]
}
// NewValue3 returns a new value in the block with three arguments and zero aux values.
-func (b *Block) NewValue3(pos src.XPos, op Op, t Type, arg0, arg1, arg2 *Value) *Value {
+func (b *Block) NewValue3(pos src.XPos, op Op, t *types.Type, arg0, arg1, arg2 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Args = v.argstorage[:3]
}
// NewValue3I returns a new value in the block with three arguments and an auxint value.
-func (b *Block) NewValue3I(pos src.XPos, op Op, t Type, auxint int64, arg0, arg1, arg2 *Value) *Value {
+func (b *Block) NewValue3I(pos src.XPos, op Op, t *types.Type, auxint int64, arg0, arg1, arg2 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Args = v.argstorage[:3]
}
// NewValue3A returns a new value in the block with three arguments and an aux value.
-func (b *Block) NewValue3A(pos src.XPos, op Op, t Type, aux interface{}, arg0, arg1, arg2 *Value) *Value {
+func (b *Block) NewValue3A(pos src.XPos, op Op, t *types.Type, aux interface{}, arg0, arg1, arg2 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Aux = aux
}
// NewValue4 returns a new value in the block with four arguments and zero aux values.
-func (b *Block) NewValue4(pos src.XPos, op Op, t Type, arg0, arg1, arg2, arg3 *Value) *Value {
+func (b *Block) NewValue4(pos src.XPos, op Op, t *types.Type, arg0, arg1, arg2, arg3 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Args = []*Value{arg0, arg1, arg2, arg3}
}
// constVal returns a constant value for c.
-func (f *Func) constVal(pos src.XPos, op Op, t Type, c int64, setAuxInt bool) *Value {
+func (f *Func) constVal(pos src.XPos, op Op, t *types.Type, c int64, setAuxInt bool) *Value {
if f.constants == nil {
f.constants = make(map[int64][]*Value)
}
vv := f.constants[c]
for _, v := range vv {
- if v.Op == op && v.Type.Compare(t) == CMPeq {
+ if v.Op == op && v.Type.Compare(t) == types.CMPeq {
if setAuxInt && v.AuxInt != c {
panic(fmt.Sprintf("cached const %s should have AuxInt of %d", v.LongString(), c))
}
)
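constVal reuses previously allocated constants by keying on the constant's bits and scanning a small per-key slice to disambiguate op/type collisions. A self-contained sketch of that caching scheme — the `value` and `fn` types here are hypothetical stand-ins for ssa.Value and ssa.Func:

```go
package main

import "fmt"

// value is a stripped-down stand-in for ssa.Value.
type value struct {
	op     string
	typ    string
	auxInt int64
}

// fn caches constants the way Func.constVal does: keyed by the constant's
// bits, with a small slice per key to distinguish op and type.
type fn struct {
	constants map[int64][]*value
}

func (f *fn) constVal(op, typ string, c int64) *value {
	if f.constants == nil {
		f.constants = make(map[int64][]*value)
	}
	for _, v := range f.constants[c] {
		if v.op == op && v.typ == typ {
			return v // reuse the cached constant
		}
	}
	v := &value{op: op, typ: typ, auxInt: c}
	f.constants[c] = append(f.constants[c], v)
	return v
}

func main() {
	var f fn
	a := f.constVal("Const64", "int64", 14)
	b := f.constVal("Const64", "int64", 14)
	fmt.Println(a == b) // same pointer: the constant was deduplicated
}
```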
// ConstBool returns a bool constant representing its argument.
-func (f *Func) ConstBool(pos src.XPos, t Type, c bool) *Value {
+func (f *Func) ConstBool(pos src.XPos, t *types.Type, c bool) *Value {
i := int64(0)
if c {
i = 1
}
return f.constVal(pos, OpConstBool, t, i, true)
}
-func (f *Func) ConstInt8(pos src.XPos, t Type, c int8) *Value {
+func (f *Func) ConstInt8(pos src.XPos, t *types.Type, c int8) *Value {
return f.constVal(pos, OpConst8, t, int64(c), true)
}
-func (f *Func) ConstInt16(pos src.XPos, t Type, c int16) *Value {
+func (f *Func) ConstInt16(pos src.XPos, t *types.Type, c int16) *Value {
return f.constVal(pos, OpConst16, t, int64(c), true)
}
-func (f *Func) ConstInt32(pos src.XPos, t Type, c int32) *Value {
+func (f *Func) ConstInt32(pos src.XPos, t *types.Type, c int32) *Value {
return f.constVal(pos, OpConst32, t, int64(c), true)
}
-func (f *Func) ConstInt64(pos src.XPos, t Type, c int64) *Value {
+func (f *Func) ConstInt64(pos src.XPos, t *types.Type, c int64) *Value {
return f.constVal(pos, OpConst64, t, c, true)
}
-func (f *Func) ConstFloat32(pos src.XPos, t Type, c float64) *Value {
+func (f *Func) ConstFloat32(pos src.XPos, t *types.Type, c float64) *Value {
return f.constVal(pos, OpConst32F, t, int64(math.Float64bits(float64(float32(c)))), true)
}
-func (f *Func) ConstFloat64(pos src.XPos, t Type, c float64) *Value {
+func (f *Func) ConstFloat64(pos src.XPos, t *types.Type, c float64) *Value {
return f.constVal(pos, OpConst64F, t, int64(math.Float64bits(c)), true)
}
-func (f *Func) ConstSlice(pos src.XPos, t Type) *Value {
+func (f *Func) ConstSlice(pos src.XPos, t *types.Type) *Value {
return f.constVal(pos, OpConstSlice, t, constSliceMagic, false)
}
-func (f *Func) ConstInterface(pos src.XPos, t Type) *Value {
+func (f *Func) ConstInterface(pos src.XPos, t *types.Type) *Value {
return f.constVal(pos, OpConstInterface, t, constInterfaceMagic, false)
}
-func (f *Func) ConstNil(pos src.XPos, t Type) *Value {
+func (f *Func) ConstNil(pos src.XPos, t *types.Type) *Value {
return f.constVal(pos, OpConstNil, t, constNilMagic, false)
}
-func (f *Func) ConstEmptyString(pos src.XPos, t Type) *Value {
+func (f *Func) ConstEmptyString(pos src.XPos, t *types.Type) *Value {
v := f.constVal(pos, OpConstString, t, constEmptyStringMagic, false)
v.Aux = ""
return v
}
-func (f *Func) ConstOffPtrSP(pos src.XPos, t Type, c int64, sp *Value) *Value {
+func (f *Func) ConstOffPtrSP(pos src.XPos, t *types.Type, c int64, sp *Value) *Value {
v := f.constVal(pos, OpOffPtr, t, c, true)
if len(v.Args) == 0 {
v.AddArg(sp)
//
// fun := Fun("entry",
// Bloc("entry",
-// Valu("mem", OpInitMem, TypeMem, 0, nil),
+// Valu("mem", OpInitMem, types.TypeMem, 0, nil),
// Goto("exit")),
// Bloc("exit",
// Exit("mem")),
// Bloc("deadblock",
-// Valu("deadval", OpConstBool, TypeBool, 0, true),
+// Valu("deadval", OpConstBool, c.config.Types.Bool, 0, true),
// If("deadval", "deadblock", "exit")))
//
// and the Blocks or Values used in the Func can be accessed
// the parser can be used instead of Fun.
import (
+ "cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
"reflect"
}
// Valu defines a value in a block.
-func Valu(name string, op Op, t Type, auxint int64, aux interface{}, args ...string) valu {
+func Valu(name string, op Op, t *types.Type, auxint int64, aux interface{}, args ...string) valu {
return valu{name, op, t, auxint, aux, args}
}
type valu struct {
name string
op Op
- t Type
+ t *types.Type
auxint int64
aux interface{}
args []string
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, c.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, c.config.Types.Int64, 26, nil),
+ Valu("sum", OpAdd64, c.config.Types.Int64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem")))
{
cfg.Fun("entry",
Bloc("entry",
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
+ Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
+ Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
{
cfg.Fun("entry",
Bloc("entry",
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
+ Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
Bloc("exit",
Exit("mem")),
Bloc("entry",
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
+ Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit"))),
},
}
{
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Goto("exit")),
Bloc("exit",
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Exit("mem"))),
},
// value order changed
{
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
Exit("mem"))),
},
// value auxint different
{
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 14, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 26, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 26, nil),
Exit("mem"))),
},
// value aux different
{
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 0, 14),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 0, 14),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 0, 26),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 0, 26),
Exit("mem"))),
},
// value args different
{
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 14, nil),
- Valu("b", OpConst64, TypeInt64, 26, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "a", "b"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 26, nil),
+ Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "a", "b"),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpConst64, TypeInt64, 0, nil),
- Valu("b", OpConst64, TypeInt64, 14, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "b", "a"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpConst64, cfg.config.Types.Int64, 0, nil),
+ Valu("b", OpConst64, cfg.config.Types.Int64, 14, nil),
+ Valu("sum", OpAdd64, cfg.config.Types.Int64, 0, nil, "b", "a"),
Exit("mem"))),
},
}
c := testConfig(t)
f := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Exit("mem")))
- v1 := f.f.ConstBool(src.NoXPos, TypeBool, false)
- v2 := f.f.ConstBool(src.NoXPos, TypeBool, true)
+ v1 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, false)
+ v2 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, true)
f.f.freeValue(v1)
f.f.freeValue(v2)
- v3 := f.f.ConstBool(src.NoXPos, TypeBool, false)
- v4 := f.f.ConstBool(src.NoXPos, TypeBool, true)
+ v3 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, false)
+ v4 := f.f.ConstBool(src.NoXPos, c.config.Types.Bool, true)
if v3.AuxInt != 0 {
t.Errorf("expected %s to have auxint of 0\n", v3.LongString())
}
package ssa
import (
+ "cmd/compile/internal/types"
"fmt"
"strconv"
"testing"
)
func TestFuseEliminatesOneBranch(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
- Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "then", "exit")),
Bloc("then",
Goto("exit")),
}
func TestFuseEliminatesBothBranches(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
- Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "then", "else")),
Bloc("then",
Goto("exit")),
}
func TestFuseHandlesPhis(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
- Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "then", "else")),
Bloc("then",
Goto("exit")),
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("z0")),
Bloc("z1",
Goto("z2")),
blocks := make([]bloc, 0, 2*n+3)
blocks = append(blocks,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("cond", OpArg, TypeBool, 0, nil),
- Valu("x", OpArg, TypeInt64, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("cond", OpArg, c.config.Types.Bool, 0, nil),
+ Valu("x", OpArg, c.config.Types.Int64, 0, nil),
Goto("exit")))
phiArgs := make([]string, 0, 2*n)
}
blocks = append(blocks,
Bloc("merge",
- Valu("phi", OpPhi, TypeMem, 0, nil, phiArgs...),
+ Valu("phi", OpPhi, types.TypeMem, 0, nil, phiArgs...),
Goto("exit")),
Bloc("exit",
Exit("mem")))
(Neg32 x) -> (NEGL x)
(Neg16 x) -> (NEGL x)
(Neg8 x) -> (NEGL x)
-(Neg32F x) && !config.use387 -> (PXOR x (MOVSSconst <types.Float32> [f2i(math.Copysign(0, -1))]))
-(Neg64F x) && !config.use387 -> (PXOR x (MOVSDconst <types.Float64> [f2i(math.Copysign(0, -1))]))
+(Neg32F x) && !config.use387 -> (PXOR x (MOVSSconst <typ.Float32> [f2i(math.Copysign(0, -1))]))
+(Neg64F x) && !config.use387 -> (PXOR x (MOVSDconst <typ.Float64> [f2i(math.Copysign(0, -1))]))
(Neg32F x) && config.use387 -> (FCHS x)
(Neg64F x) && config.use387 -> (FCHS x)
// Lowering stores
// These more-specific FP versions of the Store pattern should come first.
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 -> (MOVLstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 -> (MOVLstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Lowering moves
(Move [0] _ _ mem) -> mem
(Neg32 x) -> (NEGL x)
(Neg16 x) -> (NEGL x)
(Neg8 x) -> (NEGL x)
-(Neg32F x) -> (PXOR x (MOVSSconst <types.Float32> [f2i(math.Copysign(0, -1))]))
-(Neg64F x) -> (PXOR x (MOVSDconst <types.Float64> [f2i(math.Copysign(0, -1))]))
+(Neg32F x) -> (PXOR x (MOVSSconst <typ.Float32> [f2i(math.Copysign(0, -1))]))
+(Neg64F x) -> (PXOR x (MOVSDconst <typ.Float64> [f2i(math.Copysign(0, -1))]))
(Com64 x) -> (NOTQ x)
(Com32 x) -> (NOTL x)
(OffPtr [off] ptr) && config.PtrSize == 4 -> (ADDLconst [off] ptr)
// Lowering other arithmetic
-(Ctz64 <t> x) -> (CMOVQEQ (Select0 <t> (BSFQ x)) (MOVQconst <t> [64]) (Select1 <TypeFlags> (BSFQ x)))
-(Ctz32 x) -> (Select0 (BSFQ (ORQ <types.UInt64> (MOVQconst [1<<32]) x)))
+(Ctz64 <t> x) -> (CMOVQEQ (Select0 <t> (BSFQ x)) (MOVQconst <t> [64]) (Select1 <types.TypeFlags> (BSFQ x)))
+(Ctz32 x) -> (Select0 (BSFQ (ORQ <typ.UInt64> (MOVQconst [1<<32]) x)))
-(BitLen64 <t> x) -> (ADDQconst [1] (CMOVQEQ <t> (Select0 <t> (BSRQ x)) (MOVQconst <t> [-1]) (Select1 <TypeFlags> (BSRQ x))))
-(BitLen32 x) -> (BitLen64 (MOVLQZX <types.UInt64> x))
+(BitLen64 <t> x) -> (ADDQconst [1] (CMOVQEQ <t> (Select0 <t> (BSRQ x)) (MOVQconst <t> [-1]) (Select1 <types.TypeFlags> (BSRQ x))))
+(BitLen32 x) -> (BitLen64 (MOVLQZX <typ.UInt64> x))
(Bswap64 x) -> (BSWAPQ x)
(Bswap32 x) -> (BSWAPL x)
(PopCount64 x) -> (POPCNTQ x)
(PopCount32 x) -> (POPCNTL x)
-(PopCount16 x) -> (POPCNTL (MOVWQZX <types.UInt32> x))
-(PopCount8 x) -> (POPCNTL (MOVBQZX <types.UInt32> x))
+(PopCount16 x) -> (POPCNTL (MOVWQZX <typ.UInt32> x))
+(PopCount8 x) -> (POPCNTL (MOVBQZX <typ.UInt32> x))
(Sqrt x) -> (SQRTSD x)
// Lowering stores
// These more-specific FP versions of the Store pattern should come first.
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVSDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVSSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 -> (MOVQstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 -> (MOVLstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 -> (MOVQstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 -> (MOVLstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Lowering moves
(Move [0] _ _ mem) -> mem
// Atomic stores. We use XCHG to prevent the hardware reordering a subsequent load.
// TODO: most runtime uses of atomic stores don't need that property. Use normal stores for those?
-(AtomicStore32 ptr val mem) -> (Select1 (XCHGL <MakeTuple(types.UInt32,TypeMem)> val ptr mem))
-(AtomicStore64 ptr val mem) -> (Select1 (XCHGQ <MakeTuple(types.UInt64,TypeMem)> val ptr mem))
-(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 8 -> (Select1 (XCHGQ <MakeTuple(types.BytePtr,TypeMem)> val ptr mem))
-(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 4 -> (Select1 (XCHGL <MakeTuple(types.BytePtr,TypeMem)> val ptr mem))
+(AtomicStore32 ptr val mem) -> (Select1 (XCHGL <types.NewTuple(typ.UInt32,types.TypeMem)> val ptr mem))
+(AtomicStore64 ptr val mem) -> (Select1 (XCHGQ <types.NewTuple(typ.UInt64,types.TypeMem)> val ptr mem))
+(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 8 -> (Select1 (XCHGQ <types.NewTuple(typ.BytePtr,types.TypeMem)> val ptr mem))
+(AtomicStorePtrNoWB ptr val mem) && config.PtrSize == 4 -> (Select1 (XCHGL <types.NewTuple(typ.BytePtr,types.TypeMem)> val ptr mem))
// Atomic exchanges.
(AtomicExchange32 ptr val mem) -> (XCHGL val ptr mem)
(NE (TESTB (SETNEF cmp) (SETNEF cmp)) yes no) -> (NEF cmp yes no)
// Disabled because it interferes with the pattern match above and makes worse code.
-// (SETNEF x) -> (ORQ (SETNE <types.Int8> x) (SETNAN <types.Int8> x))
-// (SETEQF x) -> (ANDQ (SETEQ <types.Int8> x) (SETORD <types.Int8> x))
+// (SETNEF x) -> (ORQ (SETNE <typ.Int8> x) (SETNAN <typ.Int8> x))
+// (SETEQF x) -> (ANDQ (SETEQ <typ.Int8> x) (SETORD <typ.Int8> x))
// fold constants into instructions
(ADDQ x (MOVQconst [c])) && is32Bit(c) -> (ADDQconst [c] x)
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
- -> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ -> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
(ORQ
s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem))
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
- -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
(ORQ
s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem)))
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
- -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLload [i0] {s} p mem))) y)
+ -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
// Big-endian indexed loads
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
- -> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ -> @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
(ORQ
s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem))
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
- -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
(ORQ
s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem)))
&& clobber(s0)
&& clobber(s1)
&& clobber(or)
- -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ -> @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
// Combine 2 byte stores + shift into rolw 8 + word store
(MOVBstore [i] {s} p w
(Mul32uhilo x y) -> (MULLU x y)
(Div32 x y) ->
- (SUB (XOR <types.UInt32> // negate the result if one operand is negative
- (Select0 <types.UInt32> (CALLudiv
- (SUB <types.UInt32> (XOR x <types.UInt32> (Signmask x)) (Signmask x)) // negate x if negative
- (SUB <types.UInt32> (XOR y <types.UInt32> (Signmask y)) (Signmask y)))) // negate y if negative
- (Signmask (XOR <types.UInt32> x y))) (Signmask (XOR <types.UInt32> x y)))
-(Div32u x y) -> (Select0 <types.UInt32> (CALLudiv x y))
+ (SUB (XOR <typ.UInt32> // negate the result if one operand is negative
+ (Select0 <typ.UInt32> (CALLudiv
+ (SUB <typ.UInt32> (XOR x <typ.UInt32> (Signmask x)) (Signmask x)) // negate x if negative
+ (SUB <typ.UInt32> (XOR y <typ.UInt32> (Signmask y)) (Signmask y)))) // negate y if negative
+ (Signmask (XOR <typ.UInt32> x y))) (Signmask (XOR <typ.UInt32> x y)))
+(Div32u x y) -> (Select0 <typ.UInt32> (CALLudiv x y))
(Div16 x y) -> (Div32 (SignExt16to32 x) (SignExt16to32 y))
(Div16u x y) -> (Div32u (ZeroExt16to32 x) (ZeroExt16to32 y))
(Div8 x y) -> (Div32 (SignExt8to32 x) (SignExt8to32 y))
(Div64F x y) -> (DIVD x y)
(Mod32 x y) ->
- (SUB (XOR <types.UInt32> // negate the result if x is negative
- (Select1 <types.UInt32> (CALLudiv
- (SUB <types.UInt32> (XOR <types.UInt32> x (Signmask x)) (Signmask x)) // negate x if negative
- (SUB <types.UInt32> (XOR <types.UInt32> y (Signmask y)) (Signmask y)))) // negate y if negative
+ (SUB (XOR <typ.UInt32> // negate the result if x is negative
+ (Select1 <typ.UInt32> (CALLudiv
+ (SUB <typ.UInt32> (XOR <typ.UInt32> x (Signmask x)) (Signmask x)) // negate x if negative
+ (SUB <typ.UInt32> (XOR <typ.UInt32> y (Signmask y)) (Signmask y)))) // negate y if negative
(Signmask x)) (Signmask x))
-(Mod32u x y) -> (Select1 <types.UInt32> (CALLudiv x y))
+(Mod32u x y) -> (Select1 <typ.UInt32> (CALLudiv x y))
(Mod16 x y) -> (Mod32 (SignExt16to32 x) (SignExt16to32 y))
(Mod16u x y) -> (Mod32u (ZeroExt16to32 x) (ZeroExt16to32 y))
(Mod8 x y) -> (Mod32 (SignExt8to32 x) (SignExt8to32 y))
// boolean ops -- booleans are represented with 0=false, 1=true
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
-(EqB x y) -> (XORconst [1] (XOR <types.Bool> x y))
+(EqB x y) -> (XORconst [1] (XOR <typ.Bool> x y))
(NeqB x y) -> (XOR x y)
(Not x) -> (XORconst [1] x)
(Rsh32x64 x (Const64 [c])) && uint64(c) < 32 -> (SRAconst x [c])
(Rsh32Ux64 x (Const64 [c])) && uint64(c) < 32 -> (SRLconst x [c])
(Lsh16x64 x (Const64 [c])) && uint64(c) < 16 -> (SLLconst x [c])
-(Rsh16x64 x (Const64 [c])) && uint64(c) < 16 -> (SRAconst (SLLconst <types.UInt32> x [16]) [c+16])
-(Rsh16Ux64 x (Const64 [c])) && uint64(c) < 16 -> (SRLconst (SLLconst <types.UInt32> x [16]) [c+16])
+(Rsh16x64 x (Const64 [c])) && uint64(c) < 16 -> (SRAconst (SLLconst <typ.UInt32> x [16]) [c+16])
+(Rsh16Ux64 x (Const64 [c])) && uint64(c) < 16 -> (SRLconst (SLLconst <typ.UInt32> x [16]) [c+16])
(Lsh8x64 x (Const64 [c])) && uint64(c) < 8 -> (SLLconst x [c])
-(Rsh8x64 x (Const64 [c])) && uint64(c) < 8 -> (SRAconst (SLLconst <types.UInt32> x [24]) [c+24])
-(Rsh8Ux64 x (Const64 [c])) && uint64(c) < 8 -> (SRLconst (SLLconst <types.UInt32> x [24]) [c+24])
+(Rsh8x64 x (Const64 [c])) && uint64(c) < 8 -> (SRAconst (SLLconst <typ.UInt32> x [24]) [c+24])
+(Rsh8Ux64 x (Const64 [c])) && uint64(c) < 8 -> (SRLconst (SLLconst <typ.UInt32> x [24]) [c+24])
// large constant shifts
(Lsh32x64 _ (Const64 [c])) && uint64(c) >= 32 -> (Const32 [0])
// large constant signed right shift, we leave the sign bit
(Rsh32x64 x (Const64 [c])) && uint64(c) >= 32 -> (SRAconst x [31])
-(Rsh16x64 x (Const64 [c])) && uint64(c) >= 16 -> (SRAconst (SLLconst <types.UInt32> x [16]) [31])
-(Rsh8x64 x (Const64 [c])) && uint64(c) >= 8 -> (SRAconst (SLLconst <types.UInt32> x [24]) [31])
+(Rsh16x64 x (Const64 [c])) && uint64(c) >= 16 -> (SRAconst (SLLconst <typ.UInt32> x [16]) [31])
+(Rsh8x64 x (Const64 [c])) && uint64(c) >= 8 -> (SRAconst (SLLconst <typ.UInt32> x [24]) [31])
// constants
(Const8 [val]) -> (MOVWconst [val])
(SignExt16to32 x) -> (MOVHreg x)
(Signmask x) -> (SRAconst x [31])
-(Zeromask x) -> (SRAconst (RSBshiftRL <types.Int32> x x [1]) [31]) // sign bit of uint32(x)>>1 - x
+(Zeromask x) -> (SRAconst (RSBshiftRL <typ.Int32> x x [1]) [31]) // sign bit of uint32(x)>>1 - x
(Slicemask <t> x) -> (SRAconst (RSBconst <t> [0] x) [31])
// float <-> int conversion
(Load <t> ptr mem) && is64BitFloat(t) -> (MOVDload ptr mem)
// stores
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
// zero instructions
(Zero [0] _ mem) -> mem
(Zero [1] ptr mem) -> (MOVBstore ptr (MOVWconst [0]) mem)
-(Zero [2] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [2] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore ptr (MOVWconst [0]) mem)
(Zero [2] ptr mem) ->
(MOVBstore [1] ptr (MOVWconst [0])
(MOVBstore [0] ptr (MOVWconst [0]) mem))
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore ptr (MOVWconst [0]) mem)
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] ptr (MOVWconst [0])
(MOVHstore [0] ptr (MOVWconst [0]) mem))
(Zero [4] ptr mem) ->
// 4 and 128 are magic constants, see runtime/mkduff.go
(Zero [s] {t} ptr mem)
&& s%4 == 0 && s > 4 && s <= 512
- && t.(Type).Alignment()%4 == 0 && !config.noDuffDevice ->
+ && t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice ->
(DUFFZERO [4 * (128 - int64(s/4))] ptr (MOVWconst [0]) mem)
// Large zeroing uses a loop
(Zero [s] {t} ptr mem)
- && (s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0 ->
- (LoweredZero [t.(Type).Alignment()]
+ && (s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0 ->
+ (LoweredZero [t.(*types.Type).Alignment()]
ptr
- (ADDconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)])
+ (ADDconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)])
(MOVWconst [0])
mem)
// moves
(Move [0] _ _ mem) -> mem
(Move [1] dst src mem) -> (MOVBstore dst (MOVBUload src mem) mem)
-(Move [2] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [2] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore dst (MOVHUload src mem) mem)
(Move [2] dst src mem) ->
(MOVBstore [1] dst (MOVBUload [1] src mem)
(MOVBstore dst (MOVBUload src mem) mem))
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore dst (MOVWload src mem) mem)
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] dst (MOVHUload [2] src mem)
(MOVHstore dst (MOVHUload src mem) mem))
(Move [4] dst src mem) ->
// 8 and 128 are magic constants, see runtime/mkduff.go
(Move [s] {t} dst src mem)
&& s%4 == 0 && s > 4 && s <= 512
- && t.(Type).Alignment()%4 == 0 && !config.noDuffDevice ->
+ && t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice ->
(DUFFCOPY [8 * (128 - int64(s/4))] dst src mem)
// Large move uses a loop
(Move [s] {t} dst src mem)
- && (s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0 ->
- (LoweredMove [t.(Type).Alignment()]
+ && (s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0 ->
+ (LoweredMove [t.(*types.Type).Alignment()]
dst
src
- (ADDconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)])
+ (ADDconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)])
mem)
// calls
(Hmul64 x y) -> (MULH x y)
(Hmul64u x y) -> (UMULH x y)
-(Hmul32 x y) -> (SRAconst (MULL <types.Int64> x y) [32])
-(Hmul32u x y) -> (SRAconst (UMULL <types.UInt64> x y) [32])
+(Hmul32 x y) -> (SRAconst (MULL <typ.Int64> x y) [32])
+(Hmul32u x y) -> (SRAconst (UMULL <typ.UInt64> x y) [32])
(Div64 x y) -> (DIV x y)
(Div64u x y) -> (UDIV x y)
(Ctz64 <t> x) -> (CLZ (RBIT <t> x))
(Ctz32 <t> x) -> (CLZW (RBITW <t> x))
-(BitLen64 x) -> (SUB (MOVDconst [64]) (CLZ <types.Int> x))
+(BitLen64 x) -> (SUB (MOVDconst [64]) (CLZ <typ.Int> x))
(Bswap64 x) -> (REV x)
(Bswap32 x) -> (REVW x)
(BitRev64 x) -> (RBIT x)
(BitRev32 x) -> (RBITW x)
-(BitRev16 x) -> (SRLconst [48] (RBIT <types.UInt64> x))
-(BitRev8 x) -> (SRLconst [56] (RBIT <types.UInt64> x))
+(BitRev16 x) -> (SRLconst [48] (RBIT <typ.UInt64> x))
+(BitRev8 x) -> (SRLconst [56] (RBIT <typ.UInt64> x))
// boolean ops -- booleans are represented with 0=false, 1=true
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
-(EqB x y) -> (XOR (MOVDconst [1]) (XOR <types.Bool> x y))
+(EqB x y) -> (XOR (MOVDconst [1]) (XOR <typ.Bool> x y))
(NeqB x y) -> (XOR x y)
(Not x) -> (XOR (MOVDconst [1]) x)
(Load <t> ptr mem) && is64BitFloat(t) -> (FMOVDload ptr mem)
// stores
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && !is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
// zeroing
(Zero [0] _ mem) -> mem
(Add64F x y) -> (ADDD x y)
(Select0 (Add32carry <t> x y)) -> (ADD <t.FieldType(0)> x y)
-(Select1 (Add32carry <t> x y)) -> (SGTU <types.Bool> x (ADD <t.FieldType(0)> x y))
+(Select1 (Add32carry <t> x y)) -> (SGTU <typ.Bool> x (ADD <t.FieldType(0)> x y))
(Add32withcarry <t> x y c) -> (ADD c (ADD <t> x y))
(SubPtr x y) -> (SUB x y)
(Sub64F x y) -> (SUBD x y)
(Select0 (Sub32carry <t> x y)) -> (SUB <t.FieldType(0)> x y)
-(Select1 (Sub32carry <t> x y)) -> (SGTU <types.Bool> (SUB <t.FieldType(0)> x y) x)
+(Select1 (Sub32carry <t> x y)) -> (SGTU <typ.Bool> (SUB <t.FieldType(0)> x y) x)
(Sub32withcarry <t> x y c) -> (SUB (SUB <t> x y) c)
(Mul32 x y) -> (MUL x y)
(Rsh32x64 x (Const64 [c])) && uint32(c) < 32 -> (SRAconst x [c])
(Rsh32Ux64 x (Const64 [c])) && uint32(c) < 32 -> (SRLconst x [c])
(Lsh16x64 x (Const64 [c])) && uint32(c) < 16 -> (SLLconst x [c])
-(Rsh16x64 x (Const64 [c])) && uint32(c) < 16 -> (SRAconst (SLLconst <types.UInt32> x [16]) [c+16])
-(Rsh16Ux64 x (Const64 [c])) && uint32(c) < 16 -> (SRLconst (SLLconst <types.UInt32> x [16]) [c+16])
+(Rsh16x64 x (Const64 [c])) && uint32(c) < 16 -> (SRAconst (SLLconst <typ.UInt32> x [16]) [c+16])
+(Rsh16Ux64 x (Const64 [c])) && uint32(c) < 16 -> (SRLconst (SLLconst <typ.UInt32> x [16]) [c+16])
(Lsh8x64 x (Const64 [c])) && uint32(c) < 8 -> (SLLconst x [c])
-(Rsh8x64 x (Const64 [c])) && uint32(c) < 8 -> (SRAconst (SLLconst <types.UInt32> x [24]) [c+24])
-(Rsh8Ux64 x (Const64 [c])) && uint32(c) < 8 -> (SRLconst (SLLconst <types.UInt32> x [24]) [c+24])
+(Rsh8x64 x (Const64 [c])) && uint32(c) < 8 -> (SRAconst (SLLconst <typ.UInt32> x [24]) [c+24])
+(Rsh8Ux64 x (Const64 [c])) && uint32(c) < 8 -> (SRLconst (SLLconst <typ.UInt32> x [24]) [c+24])
// large constant shifts
(Lsh32x64 _ (Const64 [c])) && uint32(c) >= 32 -> (MOVWconst [0])
// large constant signed right shift, we leave the sign bit
(Rsh32x64 x (Const64 [c])) && uint32(c) >= 32 -> (SRAconst x [31])
-(Rsh16x64 x (Const64 [c])) && uint32(c) >= 16 -> (SRAconst (SLLconst <types.UInt32> x [16]) [31])
-(Rsh8x64 x (Const64 [c])) && uint32(c) >= 8 -> (SRAconst (SLLconst <types.UInt32> x [24]) [31])
+(Rsh16x64 x (Const64 [c])) && uint32(c) >= 16 -> (SRAconst (SLLconst <typ.UInt32> x [16]) [31])
+(Rsh8x64 x (Const64 [c])) && uint32(c) >= 8 -> (SRAconst (SLLconst <typ.UInt32> x [24]) [31])
// shifts
// hardware instruction uses only the low 5 bits of the shift
(Rsh8Ux16 <t> x y) -> (CMOVZ (SRL <t> (ZeroExt8to32 x) (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
(Rsh8Ux8 <t> x y) -> (CMOVZ (SRL <t> (ZeroExt8to32 x) (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
-(Rsh32x32 x y) -> (SRA x ( CMOVZ <types.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
-(Rsh32x16 x y) -> (SRA x ( CMOVZ <types.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
-(Rsh32x8 x y) -> (SRA x ( CMOVZ <types.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
+(Rsh32x32 x y) -> (SRA x ( CMOVZ <typ.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
+(Rsh32x16 x y) -> (SRA x ( CMOVZ <typ.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
+(Rsh32x8 x y) -> (SRA x ( CMOVZ <typ.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
-(Rsh16x32 x y) -> (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
-(Rsh16x16 x y) -> (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
-(Rsh16x8 x y) -> (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
+(Rsh16x32 x y) -> (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
+(Rsh16x16 x y) -> (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
+(Rsh16x8 x y) -> (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
-(Rsh8x32 x y) -> (SRA (SignExt8to32 x) ( CMOVZ <types.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
-(Rsh8x16 x y) -> (SRA (SignExt8to32 x) ( CMOVZ <types.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
-(Rsh8x8 x y) -> (SRA (SignExt8to32 x) ( CMOVZ <types.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
+(Rsh8x32 x y) -> (SRA (SignExt8to32 x) ( CMOVZ <typ.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
+(Rsh8x16 x y) -> (SRA (SignExt8to32 x) ( CMOVZ <typ.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
+(Rsh8x8 x y) -> (SRA (SignExt8to32 x) ( CMOVZ <typ.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
// unary ops
(Neg32 x) -> (NEG x)
// boolean ops -- booleans are represented with 0=false, 1=true
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
-(EqB x y) -> (XORconst [1] (XOR <types.Bool> x y))
+(EqB x y) -> (XORconst [1] (XOR <typ.Bool> x y))
(NeqB x y) -> (XOR x y)
(Not x) -> (XORconst [1] x)
(Load <t> ptr mem) && is64BitFloat(t) -> (MOVDload ptr mem)
// stores
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
// zero instructions
(Zero [0] _ mem) -> mem
(Zero [1] ptr mem) -> (MOVBstore ptr (MOVWconst [0]) mem)
-(Zero [2] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [2] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore ptr (MOVWconst [0]) mem)
(Zero [2] ptr mem) ->
(MOVBstore [1] ptr (MOVWconst [0])
(MOVBstore [0] ptr (MOVWconst [0]) mem))
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore ptr (MOVWconst [0]) mem)
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] ptr (MOVWconst [0])
(MOVHstore [0] ptr (MOVWconst [0]) mem))
(Zero [4] ptr mem) ->
(MOVBstore [2] ptr (MOVWconst [0])
(MOVBstore [1] ptr (MOVWconst [0])
(MOVBstore [0] ptr (MOVWconst [0]) mem)))
-(Zero [6] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [6] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [4] ptr (MOVWconst [0])
(MOVHstore [2] ptr (MOVWconst [0])
(MOVHstore [0] ptr (MOVWconst [0]) mem)))
-(Zero [8] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [8] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [4] ptr (MOVWconst [0])
(MOVWstore [0] ptr (MOVWconst [0]) mem))
-(Zero [12] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [12] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [8] ptr (MOVWconst [0])
(MOVWstore [4] ptr (MOVWconst [0])
(MOVWstore [0] ptr (MOVWconst [0]) mem)))
-(Zero [16] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [16] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [12] ptr (MOVWconst [0])
(MOVWstore [8] ptr (MOVWconst [0])
(MOVWstore [4] ptr (MOVWconst [0])
// large or unaligned zeroing uses a loop
(Zero [s] {t} ptr mem)
- && (s > 16 || t.(Type).Alignment()%4 != 0) ->
- (LoweredZero [t.(Type).Alignment()]
+ && (s > 16 || t.(*types.Type).Alignment()%4 != 0) ->
+ (LoweredZero [t.(*types.Type).Alignment()]
ptr
- (ADDconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)])
+ (ADDconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)])
mem)
// moves
(Move [0] _ _ mem) -> mem
(Move [1] dst src mem) -> (MOVBstore dst (MOVBUload src mem) mem)
-(Move [2] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [2] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore dst (MOVHUload src mem) mem)
(Move [2] dst src mem) ->
(MOVBstore [1] dst (MOVBUload [1] src mem)
(MOVBstore dst (MOVBUload src mem) mem))
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore dst (MOVWload src mem) mem)
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] dst (MOVHUload [2] src mem)
(MOVHstore dst (MOVHUload src mem) mem))
(Move [4] dst src mem) ->
(MOVBstore [2] dst (MOVBUload [2] src mem)
(MOVBstore [1] dst (MOVBUload [1] src mem)
(MOVBstore dst (MOVBUload src mem) mem)))
-(Move [8] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [8] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [4] dst (MOVWload [4] src mem)
(MOVWstore dst (MOVWload src mem) mem))
-(Move [8] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [8] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [6] dst (MOVHload [6] src mem)
(MOVHstore [4] dst (MOVHload [4] src mem)
(MOVHstore [2] dst (MOVHload [2] src mem)
(MOVHstore dst (MOVHload src mem) mem))))
-(Move [6] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [6] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [4] dst (MOVHload [4] src mem)
(MOVHstore [2] dst (MOVHload [2] src mem)
(MOVHstore dst (MOVHload src mem) mem)))
-(Move [12] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [12] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [8] dst (MOVWload [8] src mem)
(MOVWstore [4] dst (MOVWload [4] src mem)
(MOVWstore dst (MOVWload src mem) mem)))
-(Move [16] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [16] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [12] dst (MOVWload [12] src mem)
(MOVWstore [8] dst (MOVWload [8] src mem)
(MOVWstore [4] dst (MOVWload [4] src mem)
// large or unaligned move uses a loop
(Move [s] {t} dst src mem)
- && (s > 16 || t.(Type).Alignment()%4 != 0) ->
- (LoweredMove [t.(Type).Alignment()]
+ && (s > 16 || t.(*types.Type).Alignment()%4 != 0) ->
+ (LoweredMove [t.(*types.Type).Alignment()]
dst
src
- (ADDconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)])
+ (ADDconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)])
mem)
// calls
// AtomicOr8(ptr,val) -> LoweredAtomicOr(ptr&^3,uint32(val) << ((ptr & 3) * 8))
(AtomicOr8 ptr val mem) && !config.BigEndian ->
- (LoweredAtomicOr (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr)
- (SLL <types.UInt32> (ZeroExt8to32 val)
- (SLLconst <types.UInt32> [3]
- (ANDconst <types.UInt32> [3] ptr))) mem)
+ (LoweredAtomicOr (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr)
+ (SLL <typ.UInt32> (ZeroExt8to32 val)
+ (SLLconst <typ.UInt32> [3]
+ (ANDconst <typ.UInt32> [3] ptr))) mem)
// AtomicAnd8(ptr,val) -> LoweredAtomicAnd(ptr&^3,(uint32(val) << ((ptr & 3) * 8)) | ^(uint32(0xFF) << ((ptr & 3) * 8))))
(AtomicAnd8 ptr val mem) && !config.BigEndian ->
- (LoweredAtomicAnd (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr)
- (OR <types.UInt32> (SLL <types.UInt32> (ZeroExt8to32 val)
- (SLLconst <types.UInt32> [3]
- (ANDconst <types.UInt32> [3] ptr)))
- (NORconst [0] <types.UInt32> (SLL <types.UInt32>
- (MOVWconst [0xff]) (SLLconst <types.UInt32> [3]
- (ANDconst <types.UInt32> [3] ptr))))) mem)
+ (LoweredAtomicAnd (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr)
+ (OR <typ.UInt32> (SLL <typ.UInt32> (ZeroExt8to32 val)
+ (SLLconst <typ.UInt32> [3]
+ (ANDconst <typ.UInt32> [3] ptr)))
+ (NORconst [0] <typ.UInt32> (SLL <typ.UInt32>
+ (MOVWconst [0xff]) (SLLconst <typ.UInt32> [3]
+ (ANDconst <typ.UInt32> [3] ptr))))) mem)
// AtomicOr8(ptr,val) -> LoweredAtomicOr(ptr&^3,uint32(val) << (((ptr^3) & 3) * 8))
(AtomicOr8 ptr val mem) && config.BigEndian ->
- (LoweredAtomicOr (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr)
- (SLL <types.UInt32> (ZeroExt8to32 val)
- (SLLconst <types.UInt32> [3]
- (ANDconst <types.UInt32> [3]
- (XORconst <types.UInt32> [3] ptr)))) mem)
+ (LoweredAtomicOr (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr)
+ (SLL <typ.UInt32> (ZeroExt8to32 val)
+ (SLLconst <typ.UInt32> [3]
+ (ANDconst <typ.UInt32> [3]
+ (XORconst <typ.UInt32> [3] ptr)))) mem)
// AtomicAnd8(ptr,val) -> LoweredAtomicAnd(ptr&^3,(uint32(val) << (((ptr^3) & 3) * 8)) | ^(uint32(0xFF) << (((ptr^3) & 3) * 8))))
(AtomicAnd8 ptr val mem) && config.BigEndian ->
- (LoweredAtomicAnd (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr)
- (OR <types.UInt32> (SLL <types.UInt32> (ZeroExt8to32 val)
- (SLLconst <types.UInt32> [3]
- (ANDconst <types.UInt32> [3]
- (XORconst <types.UInt32> [3] ptr))))
- (NORconst [0] <types.UInt32> (SLL <types.UInt32>
- (MOVWconst [0xff]) (SLLconst <types.UInt32> [3]
- (ANDconst <types.UInt32> [3]
- (XORconst <types.UInt32> [3] ptr)))))) mem)
+ (LoweredAtomicAnd (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr)
+ (OR <typ.UInt32> (SLL <typ.UInt32> (ZeroExt8to32 val)
+ (SLLconst <typ.UInt32> [3]
+ (ANDconst <typ.UInt32> [3]
+ (XORconst <typ.UInt32> [3] ptr))))
+ (NORconst [0] <typ.UInt32> (SLL <typ.UInt32>
+ (MOVWconst [0xff]) (SLLconst <typ.UInt32> [3]
+ (ANDconst <typ.UInt32> [3]
+ (XORconst <typ.UInt32> [3] ptr)))))) mem)
// checks
(Hmul64 x y) -> (Select0 (MULV x y))
(Hmul64u x y) -> (Select0 (MULVU x y))
-(Hmul32 x y) -> (SRAVconst (Select1 <types.Int64> (MULV (SignExt32to64 x) (SignExt32to64 y))) [32])
-(Hmul32u x y) -> (SRLVconst (Select1 <types.UInt64> (MULVU (ZeroExt32to64 x) (ZeroExt32to64 y))) [32])
+(Hmul32 x y) -> (SRAVconst (Select1 <typ.Int64> (MULV (SignExt32to64 x) (SignExt32to64 y))) [32])
+(Hmul32u x y) -> (SRLVconst (Select1 <typ.UInt64> (MULVU (ZeroExt32to64 x) (ZeroExt32to64 y))) [32])
(Div64 x y) -> (Select1 (DIVV x y))
(Div64u x y) -> (Select1 (DIVVU x y))
// shifts
// hardware instruction uses only the low 6 bits of the shift
// we compare to 64 to ensure Go semantics for large shifts
-(Lsh64x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
-(Lsh64x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
-(Lsh64x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
-(Lsh64x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
-
-(Lsh32x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
-(Lsh32x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
-(Lsh32x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
-(Lsh32x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
-
-(Lsh16x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
-(Lsh16x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
-(Lsh16x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
-(Lsh16x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
-
-(Lsh8x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
-(Lsh8x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
-(Lsh8x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
-(Lsh8x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
-
-(Rsh64Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> x y))
-(Rsh64Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> x (ZeroExt32to64 y)))
-(Rsh64Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> x (ZeroExt16to64 y)))
-(Rsh64Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> x (ZeroExt8to64 y)))
-
-(Rsh32Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> (ZeroExt32to64 x) y))
-(Rsh32Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt32to64 y)))
-(Rsh32Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt16to64 y)))
-(Rsh32Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt8to64 y)))
-
-(Rsh16Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> (ZeroExt16to64 x) y))
-(Rsh16Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt32to64 y)))
-(Rsh16Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt16to64 y)))
-(Rsh16Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt8to64 y)))
-
-(Rsh8Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> (ZeroExt8to64 x) y))
-(Rsh8Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt32to64 y)))
-(Rsh8Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt16to64 y)))
-(Rsh8Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt8to64 y)))
-
-(Rsh64x64 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
-(Rsh64x32 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
-(Rsh64x16 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
-(Rsh64x8 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
-
-(Rsh32x64 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
-(Rsh32x32 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
-(Rsh32x16 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
-(Rsh32x8 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
-
-(Rsh16x64 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
-(Rsh16x32 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
-(Rsh16x16 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
-(Rsh16x8 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
-
-(Rsh8x64 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
-(Rsh8x32 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
-(Rsh8x16 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
-(Rsh8x8 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
+(Lsh64x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
+(Lsh64x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+(Lsh64x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+(Lsh64x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+
+(Lsh32x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
+(Lsh32x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+(Lsh32x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+(Lsh32x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+
+(Lsh16x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
+(Lsh16x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+(Lsh16x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+(Lsh16x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+
+(Lsh8x64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
+(Lsh8x32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+(Lsh8x16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+(Lsh8x8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+
+(Rsh64Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> x y))
+(Rsh64Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> x (ZeroExt32to64 y)))
+(Rsh64Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> x (ZeroExt16to64 y)))
+(Rsh64Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> x (ZeroExt8to64 y)))
+
+(Rsh32Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> (ZeroExt32to64 x) y))
+(Rsh32Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt32to64 y)))
+(Rsh32Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt16to64 y)))
+(Rsh32Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt8to64 y)))
+
+(Rsh16Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> (ZeroExt16to64 x) y))
+(Rsh16Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt32to64 y)))
+(Rsh16Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt16to64 y)))
+(Rsh16Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt8to64 y)))
+
+(Rsh8Ux64 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> (ZeroExt8to64 x) y))
+(Rsh8Ux32 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt32to64 y)))
+(Rsh8Ux16 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt16to64 y)))
+(Rsh8Ux8 <t> x y) -> (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt8to64 y)))
+
+(Rsh64x64 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
+(Rsh64x32 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
+(Rsh64x16 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
+(Rsh64x8 <t> x y) -> (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
+
+(Rsh32x64 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
+(Rsh32x32 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
+(Rsh32x16 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
+(Rsh32x8 <t> x y) -> (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
+
+(Rsh16x64 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
+(Rsh16x32 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
+(Rsh16x16 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
+(Rsh16x8 <t> x y) -> (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
+
+(Rsh8x64 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
+(Rsh8x32 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
+(Rsh8x16 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
+(Rsh8x8 <t> x y) -> (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
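The signed-shift rules above all follow one pattern: Go requires a signed right shift by a count >= the operand width to act like a shift by width-1 (filling with sign bits), while MIPS64's SRAV only looks at the low 6 bits of the count. The `NEGV (SGTU y 63)` term is all ones exactly when y > 63, so ORing it into the count clamps the effective shift to 63. A minimal sketch of the `(Rsh64x64 x y)` lowering, using hypothetical helper names (`sgtu`, `rsh64x64` are not compiler code):

```go
package main

import "fmt"

// sgtu mirrors the MIPS64 SGTU op: 1 if a > b (unsigned), else 0.
func sgtu(a, b uint64) uint64 {
	if a > b {
		return 1
	}
	return 0
}

// rsh64x64 sketches (SRAV x (OR (NEGV (SGTU y (Const64 [63]))) y)).
func rsh64x64(x int64, y uint64) int64 {
	mask := -int64(sgtu(y, 63)) // NEGV(SGTU y 63): all ones iff y > 63
	amt := uint64(mask) | y     // OR: either y, or a count whose low 6 bits are 63
	return x >> (amt & 63)      // SRAV uses only the low 6 bits of the count
}

func main() {
	fmt.Println(rsh64x64(-8, 2))   // in-range shift
	fmt.Println(rsh64x64(-8, 100)) // count clamped to 63: all sign bits
}
```

The unsigned (`Rsh*Ux*`) and left-shift rules use the same trick with AND instead of OR, so an out-of-range count zeroes the result instead of saturating it.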
// unary ops
(Neg64 x) -> (NEGV x)
// boolean ops -- booleans are represented with 0=false, 1=true
(AndB x y) -> (AND x y)
(OrB x y) -> (OR x y)
-(EqB x y) -> (XOR (MOVVconst [1]) (XOR <types.Bool> x y))
+(EqB x y) -> (XOR (MOVVconst [1]) (XOR <typ.Bool> x y))
(NeqB x y) -> (XOR x y)
(Not x) -> (XORconst [1] x)
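With booleans encoded as 0/1 as the comment above notes, equality is XNOR: XOR yields 1 exactly when the operands differ, and XORing with the constant 1 flips that, which is what the EqB and Not rules compute. A small sketch with hypothetical helpers (not compiler code):

```go
package main

import "fmt"

// eqB sketches (EqB x y) -> (XOR (MOVVconst [1]) (XOR x y)).
func eqB(x, y uint64) uint64 { return 1 ^ (x ^ y) }

// notB sketches (Not x) -> (XORconst [1] x).
func notB(x uint64) uint64 { return x ^ 1 }

func main() {
	fmt.Println(eqB(1, 1), eqB(0, 1), notB(0))
}
```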
(Load <t> ptr mem) && is64BitFloat(t) -> (MOVDload ptr mem)
// stores
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && !is64BitFloat(val.Type) -> (MOVVstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type) -> (MOVVstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (MOVFstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (MOVDstore ptr val mem)
// zeroing
(Zero [0] _ mem) -> mem
(Zero [1] ptr mem) -> (MOVBstore ptr (MOVVconst [0]) mem)
-(Zero [2] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [2] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore ptr (MOVVconst [0]) mem)
(Zero [2] ptr mem) ->
(MOVBstore [1] ptr (MOVVconst [0])
(MOVBstore [0] ptr (MOVVconst [0]) mem))
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore ptr (MOVVconst [0]) mem)
-(Zero [4] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [4] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] ptr (MOVVconst [0])
(MOVHstore [0] ptr (MOVVconst [0]) mem))
(Zero [4] ptr mem) ->
(MOVBstore [3] ptr (MOVVconst [0])
(MOVBstore [2] ptr (MOVVconst [0])
(MOVBstore [1] ptr (MOVVconst [0])
(MOVBstore [0] ptr (MOVVconst [0]) mem))))
-(Zero [8] {t} ptr mem) && t.(Type).Alignment()%8 == 0 ->
+(Zero [8] {t} ptr mem) && t.(*types.Type).Alignment()%8 == 0 ->
(MOVVstore ptr (MOVVconst [0]) mem)
-(Zero [8] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [8] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [4] ptr (MOVVconst [0])
(MOVWstore [0] ptr (MOVVconst [0]) mem))
-(Zero [8] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [8] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [6] ptr (MOVVconst [0])
(MOVHstore [4] ptr (MOVVconst [0])
(MOVHstore [2] ptr (MOVVconst [0])
(MOVHstore [0] ptr (MOVVconst [0]) mem))))
(Zero [3] ptr mem) ->
(MOVBstore [2] ptr (MOVVconst [0])
(MOVBstore [1] ptr (MOVVconst [0])
(MOVBstore [0] ptr (MOVVconst [0]) mem)))
-(Zero [6] {t} ptr mem) && t.(Type).Alignment()%2 == 0 ->
+(Zero [6] {t} ptr mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [4] ptr (MOVVconst [0])
(MOVHstore [2] ptr (MOVVconst [0])
(MOVHstore [0] ptr (MOVVconst [0]) mem)))
-(Zero [12] {t} ptr mem) && t.(Type).Alignment()%4 == 0 ->
+(Zero [12] {t} ptr mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [8] ptr (MOVVconst [0])
(MOVWstore [4] ptr (MOVVconst [0])
(MOVWstore [0] ptr (MOVVconst [0]) mem)))
-(Zero [16] {t} ptr mem) && t.(Type).Alignment()%8 == 0 ->
+(Zero [16] {t} ptr mem) && t.(*types.Type).Alignment()%8 == 0 ->
(MOVVstore [8] ptr (MOVVconst [0])
(MOVVstore [0] ptr (MOVVconst [0]) mem))
-(Zero [24] {t} ptr mem) && t.(Type).Alignment()%8 == 0 ->
+(Zero [24] {t} ptr mem) && t.(*types.Type).Alignment()%8 == 0 ->
(MOVVstore [16] ptr (MOVVconst [0])
(MOVVstore [8] ptr (MOVVconst [0])
(MOVVstore [0] ptr (MOVVconst [0]) mem)))
// 8, and 128 are magic constants, see runtime/mkduff.go
(Zero [s] {t} ptr mem)
&& s%8 == 0 && s > 24 && s <= 8*128
- && t.(Type).Alignment()%8 == 0 && !config.noDuffDevice ->
+ && t.(*types.Type).Alignment()%8 == 0 && !config.noDuffDevice ->
(DUFFZERO [8 * (128 - int64(s/8))] ptr mem)
// large or unaligned zeroing uses a loop
(Zero [s] {t} ptr mem)
- && (s > 8*128 || config.noDuffDevice) || t.(Type).Alignment()%8 != 0 ->
- (LoweredZero [t.(Type).Alignment()]
+ && (s > 8*128 || config.noDuffDevice) || t.(*types.Type).Alignment()%8 != 0 ->
+ (LoweredZero [t.(*types.Type).Alignment()]
ptr
- (ADDVconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)])
+ (ADDVconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)])
mem)
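The zeroing rules above dispatch on the type's alignment: the widest store the alignment permits is tried first (MOVVstore), falling back through MOVWstore and MOVHstore to byte stores, with DUFFZERO or a loop for large sizes. A sketch of that dispatch, under the assumption that store width is simply the largest power of two dividing the alignment (`storeWidth` and `numStores` are illustrative helpers, not compiler code):

```go
package main

import "fmt"

// storeWidth picks the widest store the alignment permits,
// mirroring the guard order of the Zero rules above.
func storeWidth(align int64) int64 {
	for _, w := range []int64{8, 4, 2} {
		if align%w == 0 {
			return w
		}
	}
	return 1
}

// numStores counts the stores an inline Zero of the given size
// and alignment expands to.
func numStores(size, align int64) int64 {
	w := storeWidth(align)
	if w > size {
		w = size
	}
	return (size + w - 1) / w
}

func main() {
	fmt.Println(numStores(8, 8)) // one MOVVstore
	fmt.Println(numStores(8, 4)) // two MOVWstores
	fmt.Println(numStores(8, 2)) // four MOVHstores
	fmt.Println(numStores(8, 1)) // eight MOVBstores
}
```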
// moves
(Move [0] _ _ mem) -> mem
(Move [1] dst src mem) -> (MOVBstore dst (MOVBload src mem) mem)
-(Move [2] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [2] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore dst (MOVHload src mem) mem)
(Move [2] dst src mem) ->
(MOVBstore [1] dst (MOVBload [1] src mem)
(MOVBstore dst (MOVBload src mem) mem))
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore dst (MOVWload src mem) mem)
-(Move [4] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [4] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [2] dst (MOVHload [2] src mem)
(MOVHstore dst (MOVHload src mem) mem))
(Move [4] dst src mem) ->
(MOVBstore [3] dst (MOVBload [3] src mem)
(MOVBstore [2] dst (MOVBload [2] src mem)
(MOVBstore [1] dst (MOVBload [1] src mem)
(MOVBstore dst (MOVBload src mem) mem))))
-(Move [8] {t} dst src mem) && t.(Type).Alignment()%8 == 0 ->
+(Move [8] {t} dst src mem) && t.(*types.Type).Alignment()%8 == 0 ->
(MOVVstore dst (MOVVload src mem) mem)
-(Move [8] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [8] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [4] dst (MOVWload [4] src mem)
(MOVWstore dst (MOVWload src mem) mem))
-(Move [8] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [8] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [6] dst (MOVHload [6] src mem)
(MOVHstore [4] dst (MOVHload [4] src mem)
(MOVHstore [2] dst (MOVHload [2] src mem)
(MOVHstore dst (MOVHload src mem) mem))))
(Move [3] dst src mem) ->
(MOVBstore [2] dst (MOVBload [2] src mem)
(MOVBstore [1] dst (MOVBload [1] src mem)
(MOVBstore dst (MOVBload src mem) mem)))
-(Move [6] {t} dst src mem) && t.(Type).Alignment()%2 == 0 ->
+(Move [6] {t} dst src mem) && t.(*types.Type).Alignment()%2 == 0 ->
(MOVHstore [4] dst (MOVHload [4] src mem)
(MOVHstore [2] dst (MOVHload [2] src mem)
(MOVHstore dst (MOVHload src mem) mem)))
-(Move [12] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [12] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVWstore [8] dst (MOVWload [8] src mem)
(MOVWstore [4] dst (MOVWload [4] src mem)
(MOVWstore dst (MOVWload src mem) mem)))
-(Move [16] {t} dst src mem) && t.(Type).Alignment()%8 == 0 ->
+(Move [16] {t} dst src mem) && t.(*types.Type).Alignment()%8 == 0 ->
(MOVVstore [8] dst (MOVVload [8] src mem)
(MOVVstore dst (MOVVload src mem) mem))
-(Move [24] {t} dst src mem) && t.(Type).Alignment()%8 == 0 ->
+(Move [24] {t} dst src mem) && t.(*types.Type).Alignment()%8 == 0 ->
(MOVVstore [16] dst (MOVVload [16] src mem)
(MOVVstore [8] dst (MOVVload [8] src mem)
(MOVVstore dst (MOVVload src mem) mem)))
// large or unaligned move uses a loop
(Move [s] {t} dst src mem)
- && s > 24 || t.(Type).Alignment()%8 != 0 ->
- (LoweredMove [t.(Type).Alignment()]
+ && s > 24 || t.(*types.Type).Alignment()%8 != 0 ->
+ (LoweredMove [t.(*types.Type).Alignment()]
dst
src
- (ADDVconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)])
+ (ADDVconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)])
mem)
// calls
(Rsh8x32 x (MOVDconst [c])) && uint32(c) < 8 -> (SRAWconst (SignExt8to32 x) [c])
(Rsh8Ux32 x (MOVDconst [c])) && uint32(c) < 8 -> (SRWconst (ZeroExt8to32 x) [c])
-(Rsh64x64 x y) -> (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
-(Rsh64Ux64 x y) -> (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
-(Lsh64x64 x y) -> (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
+(Rsh64x64 x y) -> (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
+(Rsh64Ux64 x y) -> (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
+(Lsh64x64 x y) -> (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
-(Rsh32x64 x y) -> (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
-(Rsh32Ux64 x y) -> (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
-(Lsh32x64 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
+(Rsh32x64 x y) -> (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
+(Rsh32Ux64 x y) -> (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
+(Lsh32x64 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
-(Rsh16x64 x y) -> (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
-(Rsh16Ux64 x y) -> (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
-(Lsh16x64 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
+(Rsh16x64 x y) -> (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
+(Rsh16Ux64 x y) -> (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
+(Lsh16x64 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
-(Rsh8x64 x y) -> (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
-(Rsh8Ux64 x y) -> (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
-(Lsh8x64 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
+(Rsh8x64 x y) -> (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
+(Rsh8Ux64 x y) -> (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
+(Lsh8x64 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
-(Rsh64x32 x y) -> (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
-(Rsh64Ux32 x y) -> (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
-(Lsh64x32 x y) -> (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
+(Rsh64x32 x y) -> (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
+(Rsh64Ux32 x y) -> (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
+(Lsh64x32 x y) -> (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
-(Rsh32x32 x y) -> (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
-(Rsh32Ux32 x y) -> (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
-(Lsh32x32 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
+(Rsh32x32 x y) -> (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
+(Rsh32Ux32 x y) -> (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
+(Lsh32x32 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
-(Rsh16x32 x y) -> (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
-(Rsh16Ux32 x y) -> (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
-(Lsh16x32 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
+(Rsh16x32 x y) -> (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
+(Rsh16Ux32 x y) -> (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
+(Lsh16x32 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
-(Rsh8x32 x y) -> (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
-(Rsh8Ux32 x y) -> (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
-(Lsh8x32 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
+(Rsh8x32 x y) -> (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
+(Rsh8Ux32 x y) -> (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
+(Lsh8x32 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
-(Rsh64x16 x y) -> (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
-(Rsh64Ux16 x y) -> (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
-(Lsh64x16 x y) -> (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
+(Rsh64x16 x y) -> (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
+(Rsh64Ux16 x y) -> (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
+(Lsh64x16 x y) -> (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
-(Rsh32x16 x y) -> (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
-(Rsh32Ux16 x y) -> (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
-(Lsh32x16 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
+(Rsh32x16 x y) -> (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
+(Rsh32Ux16 x y) -> (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
+(Lsh32x16 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
-(Rsh16x16 x y) -> (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
-(Rsh16Ux16 x y) -> (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
-(Lsh16x16 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
+(Rsh16x16 x y) -> (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
+(Rsh16Ux16 x y) -> (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
+(Lsh16x16 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
-(Rsh8x16 x y) -> (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
-(Rsh8Ux16 x y) -> (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
-(Lsh8x16 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
+(Rsh8x16 x y) -> (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
+(Rsh8Ux16 x y) -> (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
+(Lsh8x16 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
-(Rsh64x8 x y) -> (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
-(Rsh64Ux8 x y) -> (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
-(Lsh64x8 x y) -> (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
+(Rsh64x8 x y) -> (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
+(Rsh64Ux8 x y) -> (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
+(Lsh64x8 x y) -> (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
-(Rsh32x8 x y) -> (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
-(Rsh32Ux8 x y) -> (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
-(Lsh32x8 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
+(Rsh32x8 x y) -> (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
+(Rsh32Ux8 x y) -> (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
+(Lsh32x8 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
-(Rsh16x8 x y) -> (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
-(Rsh16Ux8 x y) -> (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
-(Lsh16x8 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
+(Rsh16x8 x y) -> (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
+(Rsh16Ux8 x y) -> (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
+(Lsh16x8 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
-(Rsh8x8 x y) -> (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
-(Rsh8Ux8 x y) -> (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
-(Lsh8x8 x y) -> (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
+(Rsh8x8 x y) -> (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
+(Rsh8Ux8 x y) -> (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
+(Lsh8x8 x y) -> (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
// Cleaning up shift ops when input is masked
(MaskIfNotCarry (ADDconstForCarry [c] (ANDconst [d] _))) && c < 0 && d > 0 && c + d < 0 -> (MOVDconst [-1])
(Addr {sym} base) -> (MOVDaddr {sym} base)
// (Addr {sym} base) -> (ADDconst {sym} base)
-(OffPtr [off] ptr) -> (ADD (MOVDconst <types.Int64> [off]) ptr)
+(OffPtr [off] ptr) -> (ADD (MOVDconst <typ.Int64> [off]) ptr)
(And64 x y) -> (AND x y)
(And32 x y) -> (AND x y)
(Load <t> ptr mem) && is32BitFloat(t) -> (FMOVSload ptr mem)
(Load <t> ptr mem) && is64BitFloat(t) -> (FMOVDload ptr mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is32BitFloat(val.Type) -> (FMOVDstore ptr val mem) // glitch from (Cvt32Fto64F x) -> x -- type is wrong
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && (is64BitInt(val.Type) || isPtr(val.Type)) -> (MOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitInt(val.Type) -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is32BitFloat(val.Type) -> (FMOVDstore ptr val mem) // glitch from (Cvt32Fto64F x) -> x -- type is wrong
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && (is64BitInt(val.Type) || isPtr(val.Type)) -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitInt(val.Type) -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Using Zero instead of LoweredZero allows the
// target address to be folded where possible.
(Move [4] dst src mem) ->
(MOVWstore dst (MOVWZload src mem) mem)
// MOVD for load and store must have offsets that are multiple of 4
-(Move [8] {t} dst src mem) && t.(Type).Alignment()%4 == 0 ->
+(Move [8] {t} dst src mem) && t.(*types.Type).Alignment()%4 == 0 ->
(MOVDstore dst (MOVDload src mem) mem)
(Move [8] dst src mem) ->
(MOVWstore [4] dst (MOVWZload [4] src mem)
(MOVWstore dst (MOVWZload src mem) mem))
// Lowering stores
// These more-specific FP versions of Store pattern should come first.
-(Store {t} ptr val mem) && t.(Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 && is64BitFloat(val.Type) -> (FMOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 && is32BitFloat(val.Type) -> (FMOVSstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 8 -> (MOVDstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 4 -> (MOVWstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 2 -> (MOVHstore ptr val mem)
-(Store {t} ptr val mem) && t.(Type).Size() == 1 -> (MOVBstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 8 -> (MOVDstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 4 -> (MOVWstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 2 -> (MOVHstore ptr val mem)
+(Store {t} ptr val mem) && t.(*types.Type).Size() == 1 -> (MOVBstore ptr val mem)
// Lowering moves
(If (MOVDGTnoinv (MOVDconst [0]) (MOVDconst [1]) cmp) yes no) -> (GTF cmp yes no)
(If (MOVDGEnoinv (MOVDconst [0]) (MOVDconst [1]) cmp) yes no) -> (GEF cmp yes no)
-(If cond yes no) -> (NE (CMPWconst [0] (MOVBZreg <types.Bool> cond)) yes no)
+(If cond yes no) -> (NE (CMPWconst [0] (MOVBZreg <typ.Bool> cond)) yes no)
// ***************************
// Above: lowering rules
// TODO: Should the optimizations be a separate pass?
// Fold unnecessary type conversions.
-(MOVDreg <t> x) && t.Compare(x.Type) == CMPeq -> x
-(MOVDnop <t> x) && t.Compare(x.Type) == CMPeq -> x
+(MOVDreg <t> x) && t.Compare(x.Type) == types.CMPeq -> x
+(MOVDnop <t> x) && t.Compare(x.Type) == types.CMPeq -> x
// Propagate constants through type conversions.
(MOVDreg (MOVDconst [c])) -> (MOVDconst [c])
(Load <t> ptr mem) && t.IsComplex() && t.Size() == 8 ->
(ComplexMake
- (Load <types.Float32> ptr mem)
- (Load <types.Float32>
- (OffPtr <types.Float32Ptr> [4] ptr)
+ (Load <typ.Float32> ptr mem)
+ (Load <typ.Float32>
+ (OffPtr <typ.Float32Ptr> [4] ptr)
mem)
)
-(Store {t} dst (ComplexMake real imag) mem) && t.(Type).Size() == 8 ->
- (Store {types.Float32}
- (OffPtr <types.Float32Ptr> [4] dst)
+(Store {t} dst (ComplexMake real imag) mem) && t.(*types.Type).Size() == 8 ->
+ (Store {typ.Float32}
+ (OffPtr <typ.Float32Ptr> [4] dst)
imag
- (Store {types.Float32} dst real mem))
+ (Store {typ.Float32} dst real mem))
(Load <t> ptr mem) && t.IsComplex() && t.Size() == 16 ->
(ComplexMake
- (Load <types.Float64> ptr mem)
- (Load <types.Float64>
- (OffPtr <types.Float64Ptr> [8] ptr)
+ (Load <typ.Float64> ptr mem)
+ (Load <typ.Float64>
+ (OffPtr <typ.Float64Ptr> [8] ptr)
mem)
)
-(Store {t} dst (ComplexMake real imag) mem) && t.(Type).Size() == 16 ->
- (Store {types.Float64}
- (OffPtr <types.Float64Ptr> [8] dst)
+(Store {t} dst (ComplexMake real imag) mem) && t.(*types.Type).Size() == 16 ->
+ (Store {typ.Float64}
+ (OffPtr <typ.Float64Ptr> [8] dst)
imag
- (Store {types.Float64} dst real mem))
+ (Store {typ.Float64} dst real mem))
// string ops
(StringPtr (StringMake ptr _)) -> ptr
(Load <t> ptr mem) && t.IsString() ->
(StringMake
- (Load <types.BytePtr> ptr mem)
- (Load <types.Int>
- (OffPtr <types.IntPtr> [config.PtrSize] ptr)
+ (Load <typ.BytePtr> ptr mem)
+ (Load <typ.Int>
+ (OffPtr <typ.IntPtr> [config.PtrSize] ptr)
mem))
(Store dst (StringMake ptr len) mem) ->
- (Store {types.Int}
- (OffPtr <types.IntPtr> [config.PtrSize] dst)
+ (Store {typ.Int}
+ (OffPtr <typ.IntPtr> [config.PtrSize] dst)
len
- (Store {types.BytePtr} dst ptr mem))
+ (Store {typ.BytePtr} dst ptr mem))
// slice ops
(SlicePtr (SliceMake ptr _ _ )) -> ptr
(Load <t> ptr mem) && t.IsSlice() ->
(SliceMake
(Load <t.ElemType().PtrTo()> ptr mem)
- (Load <types.Int>
- (OffPtr <types.IntPtr> [config.PtrSize] ptr)
+ (Load <typ.Int>
+ (OffPtr <typ.IntPtr> [config.PtrSize] ptr)
mem)
- (Load <types.Int>
- (OffPtr <types.IntPtr> [2*config.PtrSize] ptr)
+ (Load <typ.Int>
+ (OffPtr <typ.IntPtr> [2*config.PtrSize] ptr)
mem))
(Store dst (SliceMake ptr len cap) mem) ->
- (Store {types.Int}
- (OffPtr <types.IntPtr> [2*config.PtrSize] dst)
+ (Store {typ.Int}
+ (OffPtr <typ.IntPtr> [2*config.PtrSize] dst)
cap
- (Store {types.Int}
- (OffPtr <types.IntPtr> [config.PtrSize] dst)
+ (Store {typ.Int}
+ (OffPtr <typ.IntPtr> [config.PtrSize] dst)
len
- (Store {types.BytePtr} dst ptr mem)))
+ (Store {typ.BytePtr} dst ptr mem)))
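The SliceMake store rules above write the backing pointer at offset 0, the length at PtrSize, and the capacity at 2*PtrSize. That layout is observable from Go via unsafe; the sketch below peeks at a slice header only to illustrate the offsets the rules encode (it depends on the runtime's slice representation, an implementation detail):

```go
package main

import (
	"fmt"
	"unsafe"
)

// sliceLenCap reads the len and cap words of a slice header directly,
// at the offsets the decomposition rules use: PtrSize and 2*PtrSize.
func sliceLenCap(s *[]byte) (int, int) {
	base := unsafe.Pointer(s)
	ptrSize := unsafe.Sizeof(uintptr(0))
	l := *(*int)(unsafe.Pointer(uintptr(base) + ptrSize))
	c := *(*int)(unsafe.Pointer(uintptr(base) + 2*ptrSize))
	return l, c
}

func main() {
	s := make([]byte, 3, 7)
	l, c := sliceLenCap(&s)
	fmt.Println(l, c)
}
```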
// interface ops
(ITab (IMake itab _)) -> itab
(Load <t> ptr mem) && t.IsInterface() ->
(IMake
- (Load <types.BytePtr> ptr mem)
- (Load <types.BytePtr>
- (OffPtr <types.BytePtrPtr> [config.PtrSize] ptr)
+ (Load <typ.BytePtr> ptr mem)
+ (Load <typ.BytePtr>
+ (OffPtr <typ.BytePtrPtr> [config.PtrSize] ptr)
mem))
(Store dst (IMake itab data) mem) ->
- (Store {types.BytePtr}
- (OffPtr <types.BytePtrPtr> [config.PtrSize] dst)
+ (Store {typ.BytePtr}
+ (OffPtr <typ.BytePtrPtr> [config.PtrSize] dst)
data
- (Store {types.Uintptr} dst itab mem))
+ (Store {typ.Uintptr} dst itab mem))
// This file contains rules to decompose [u]int64 types on 32-bit
// architectures. These rules work together with the decomposeBuiltIn
-// pass which handles phis of these types.
+// pass which handles phis of these types.
(Int64Hi (Int64Make hi _)) -> hi
(Int64Lo (Int64Make _ lo)) -> lo
(Load <t> ptr mem) && is64BitInt(t) && !config.BigEndian && t.IsSigned() ->
(Int64Make
- (Load <types.Int32> (OffPtr <types.Int32Ptr> [4] ptr) mem)
- (Load <types.UInt32> ptr mem))
+ (Load <typ.Int32> (OffPtr <typ.Int32Ptr> [4] ptr) mem)
+ (Load <typ.UInt32> ptr mem))
(Load <t> ptr mem) && is64BitInt(t) && !config.BigEndian && !t.IsSigned() ->
(Int64Make
- (Load <types.UInt32> (OffPtr <types.UInt32Ptr> [4] ptr) mem)
- (Load <types.UInt32> ptr mem))
+ (Load <typ.UInt32> (OffPtr <typ.UInt32Ptr> [4] ptr) mem)
+ (Load <typ.UInt32> ptr mem))
(Load <t> ptr mem) && is64BitInt(t) && config.BigEndian && t.IsSigned() ->
(Int64Make
- (Load <types.Int32> ptr mem)
- (Load <types.UInt32> (OffPtr <types.UInt32Ptr> [4] ptr) mem))
+ (Load <typ.Int32> ptr mem)
+ (Load <typ.UInt32> (OffPtr <typ.UInt32Ptr> [4] ptr) mem))
(Load <t> ptr mem) && is64BitInt(t) && config.BigEndian && !t.IsSigned() ->
(Int64Make
- (Load <types.UInt32> ptr mem)
- (Load <types.UInt32> (OffPtr <types.UInt32Ptr> [4] ptr) mem))
+ (Load <typ.UInt32> ptr mem)
+ (Load <typ.UInt32> (OffPtr <typ.UInt32Ptr> [4] ptr) mem))
-(Store {t} dst (Int64Make hi lo) mem) && t.(Type).Size() == 8 && !config.BigEndian ->
+(Store {t} dst (Int64Make hi lo) mem) && t.(*types.Type).Size() == 8 && !config.BigEndian ->
(Store {hi.Type}
(OffPtr <hi.Type.PtrTo()> [4] dst)
hi
(Store {lo.Type} dst lo mem))
-(Store {t} dst (Int64Make hi lo) mem) && t.(Type).Size() == 8 && config.BigEndian ->
+(Store {t} dst (Int64Make hi lo) mem) && t.(*types.Type).Size() == 8 && config.BigEndian ->
(Store {lo.Type}
(OffPtr <lo.Type.PtrTo()> [4] dst)
lo
(Arg {n} [off]) && is64BitInt(v.Type) && !config.BigEndian && v.Type.IsSigned() ->
(Int64Make
- (Arg <types.Int32> {n} [off+4])
- (Arg <types.UInt32> {n} [off]))
+ (Arg <typ.Int32> {n} [off+4])
+ (Arg <typ.UInt32> {n} [off]))
(Arg {n} [off]) && is64BitInt(v.Type) && !config.BigEndian && !v.Type.IsSigned() ->
(Int64Make
- (Arg <types.UInt32> {n} [off+4])
- (Arg <types.UInt32> {n} [off]))
+ (Arg <typ.UInt32> {n} [off+4])
+ (Arg <typ.UInt32> {n} [off]))
(Arg {n} [off]) && is64BitInt(v.Type) && config.BigEndian && v.Type.IsSigned() ->
(Int64Make
- (Arg <types.Int32> {n} [off])
- (Arg <types.UInt32> {n} [off+4]))
+ (Arg <typ.Int32> {n} [off])
+ (Arg <typ.UInt32> {n} [off+4]))
(Arg {n} [off]) && is64BitInt(v.Type) && config.BigEndian && !v.Type.IsSigned() ->
(Int64Make
- (Arg <types.UInt32> {n} [off])
- (Arg <types.UInt32> {n} [off+4]))
+ (Arg <typ.UInt32> {n} [off])
+ (Arg <typ.UInt32> {n} [off+4]))
(Add64 x y) ->
(Int64Make
- (Add32withcarry <types.Int32>
+ (Add32withcarry <typ.Int32>
(Int64Hi x)
(Int64Hi y)
- (Select1 <TypeFlags> (Add32carry (Int64Lo x) (Int64Lo y))))
- (Select0 <types.UInt32> (Add32carry (Int64Lo x) (Int64Lo y))))
+ (Select1 <types.TypeFlags> (Add32carry (Int64Lo x) (Int64Lo y))))
+ (Select0 <typ.UInt32> (Add32carry (Int64Lo x) (Int64Lo y))))
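The (Add64 x y) rule lowers a 64-bit add to an Add32carry tuple, where Select0 is the low sum and Select1 the carry flag consumed by Add32withcarry. A minimal sketch of the same arithmetic using math/bits:

```go
package main

import (
	"fmt"
	"math/bits"
)

// add64 mirrors the Add64 decomposition: bits.Add32 plays the role of
// Add32carry, returning the (Select0, Select1) pair of sum and carry.
func add64(x, y uint64) uint64 {
	xlo, xhi := uint32(x), uint32(x>>32)
	ylo, yhi := uint32(y), uint32(y>>32)
	lo, carry := bits.Add32(xlo, ylo, 0) // low add, producing the carry
	hi, _ := bits.Add32(xhi, yhi, carry) // Add32withcarry on the high words
	return uint64(hi)<<32 | uint64(lo)
}

func main() {
	for _, c := range [][2]uint64{{1, 2}, {0xFFFFFFFF, 1}, {0xdeadbeefcafebabe, 0x0123456789abcdef}} {
		fmt.Println(add64(c[0], c[1]) == c[0]+c[1])
	}
}
```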
(Sub64 x y) ->
(Int64Make
- (Sub32withcarry <types.Int32>
+ (Sub32withcarry <typ.Int32>
(Int64Hi x)
(Int64Hi y)
- (Select1 <TypeFlags> (Sub32carry (Int64Lo x) (Int64Lo y))))
- (Select0 <types.UInt32> (Sub32carry (Int64Lo x) (Int64Lo y))))
+ (Select1 <types.TypeFlags> (Sub32carry (Int64Lo x) (Int64Lo y))))
+ (Select0 <typ.UInt32> (Sub32carry (Int64Lo x) (Int64Lo y))))
(Mul64 x y) ->
(Int64Make
- (Add32 <types.UInt32>
- (Mul32 <types.UInt32> (Int64Lo x) (Int64Hi y))
- (Add32 <types.UInt32>
- (Mul32 <types.UInt32> (Int64Hi x) (Int64Lo y))
- (Select0 <types.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y)))))
- (Select1 <types.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y))))
+ (Add32 <typ.UInt32>
+ (Mul32 <typ.UInt32> (Int64Lo x) (Int64Hi y))
+ (Add32 <typ.UInt32>
+ (Mul32 <typ.UInt32> (Int64Hi x) (Int64Lo y))
+ (Select0 <typ.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y)))))
+ (Select1 <typ.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y))))
(And64 x y) ->
(Int64Make
- (And32 <types.UInt32> (Int64Hi x) (Int64Hi y))
- (And32 <types.UInt32> (Int64Lo x) (Int64Lo y)))
+ (And32 <typ.UInt32> (Int64Hi x) (Int64Hi y))
+ (And32 <typ.UInt32> (Int64Lo x) (Int64Lo y)))
(Or64 x y) ->
(Int64Make
- (Or32 <types.UInt32> (Int64Hi x) (Int64Hi y))
- (Or32 <types.UInt32> (Int64Lo x) (Int64Lo y)))
+ (Or32 <typ.UInt32> (Int64Hi x) (Int64Hi y))
+ (Or32 <typ.UInt32> (Int64Lo x) (Int64Lo y)))
(Xor64 x y) ->
(Int64Make
- (Xor32 <types.UInt32> (Int64Hi x) (Int64Hi y))
- (Xor32 <types.UInt32> (Int64Lo x) (Int64Lo y)))
+ (Xor32 <typ.UInt32> (Int64Hi x) (Int64Hi y))
+ (Xor32 <typ.UInt32> (Int64Lo x) (Int64Lo y)))
(Neg64 <t> x) -> (Sub64 (Const64 <t> [0]) x)
(Com64 x) ->
(Int64Make
- (Com32 <types.UInt32> (Int64Hi x))
- (Com32 <types.UInt32> (Int64Lo x)))
+ (Com32 <typ.UInt32> (Int64Hi x))
+ (Com32 <typ.UInt32> (Int64Lo x)))
(Ctz64 x) ->
- (Add32 <types.UInt32>
- (Ctz32 <types.UInt32> (Int64Lo x))
- (And32 <types.UInt32>
- (Com32 <types.UInt32> (Zeromask (Int64Lo x)))
- (Ctz32 <types.UInt32> (Int64Hi x))))
+ (Add32 <typ.UInt32>
+ (Ctz32 <typ.UInt32> (Int64Lo x))
+ (And32 <typ.UInt32>
+ (Com32 <typ.UInt32> (Zeromask (Int64Lo x)))
+ (Ctz32 <typ.UInt32> (Int64Hi x))))
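The (Ctz64 x) rule counts trailing zeros per half: when the low word is nonzero its count wins, otherwise Zeromask gates in 32 plus the high word's count. A sketch, assuming Ctz32 of zero is 32 (which is what math/bits does):

```go
package main

import (
	"fmt"
	"math/bits"
)

// zeromask mimics the SSA Zeromask op: all ones if x != 0, else 0.
func zeromask(x uint32) uint32 {
	if x != 0 {
		return 0xFFFFFFFF
	}
	return 0
}

// ctz64 mirrors the Ctz64 rule: ctz(lo) + (^zeromask(lo)) & ctz(hi).
// When lo != 0 the second term is masked to 0; when lo == 0,
// ctz(lo) == 32 and the high word's count is added in.
func ctz64(x uint64) uint32 {
	lo, hi := uint32(x), uint32(x>>32)
	return uint32(bits.TrailingZeros32(lo)) + ^zeromask(lo)&uint32(bits.TrailingZeros32(hi))
}

func main() {
	ok := true
	for _, x := range []uint64{0, 1, 6, 1 << 40, 0xdead000000000000} {
		if ctz64(x) != uint32(bits.TrailingZeros64(x)) {
			ok = false
		}
	}
	fmt.Println(ok)
}
```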
(BitLen64 x) ->
- (Add32 <types.Int>
- (BitLen32 <types.Int> (Int64Hi x))
- (BitLen32 <types.Int>
- (Or32 <types.UInt32>
+ (Add32 <typ.Int>
+ (BitLen32 <typ.Int> (Int64Hi x))
+ (BitLen32 <typ.Int>
+ (Or32 <typ.UInt32>
(Int64Lo x)
(Zeromask (Int64Hi x)))))
(Bswap64 x) ->
(Int64Make
- (Bswap32 <types.UInt32> (Int64Lo x))
- (Bswap32 <types.UInt32> (Int64Hi x)))
+ (Bswap32 <typ.UInt32> (Int64Lo x))
+ (Bswap32 <typ.UInt32> (Int64Hi x)))
(SignExt32to64 x) -> (Int64Make (Signmask x) x)
(SignExt16to64 x) -> (SignExt32to64 (SignExt16to32 x))
(SignExt8to64 x) -> (SignExt32to64 (SignExt8to32 x))
-(ZeroExt32to64 x) -> (Int64Make (Const32 <types.UInt32> [0]) x)
+(ZeroExt32to64 x) -> (Int64Make (Const32 <typ.UInt32> [0]) x)
(ZeroExt16to64 x) -> (ZeroExt32to64 (ZeroExt16to32 x))
(ZeroExt8to64 x) -> (ZeroExt32to64 (ZeroExt8to32 x))
// turn x64 non-constant shifts to x32 shifts
// if high 32-bit of the shift is nonzero, make a huge shift
(Lsh64x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Lsh64x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Lsh64x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh64x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh64x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh64x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh64Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh64Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh64Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Lsh32x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Lsh32x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Lsh32x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh32x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh32x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh32x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh32Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh32Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh32Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Lsh16x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Lsh16x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Lsh16x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh16x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh16x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh16x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh16Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh16Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh16Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Lsh8x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Lsh8x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Lsh8x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh8x64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh8x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh8x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
(Rsh8Ux64 x (Int64Make hi lo)) && hi.Op != OpConst32 ->
- (Rsh8Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ (Rsh8Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
// 64x left shift
// result.hi = hi<<s | lo>>(32-s) | lo<<(s-32) // >> is unsigned, large shifts result 0
// result.lo = lo<<s
(Lsh64x32 (Int64Make hi lo) s) ->
(Int64Make
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Lsh32x32 <types.UInt32> hi s)
- (Rsh32Ux32 <types.UInt32>
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Lsh32x32 <typ.UInt32> hi s)
+ (Rsh32Ux32 <typ.UInt32>
lo
- (Sub32 <types.UInt32> (Const32 <types.UInt32> [32]) s)))
- (Lsh32x32 <types.UInt32>
+ (Sub32 <typ.UInt32> (Const32 <typ.UInt32> [32]) s)))
+ (Lsh32x32 <typ.UInt32>
lo
- (Sub32 <types.UInt32> s (Const32 <types.UInt32> [32]))))
- (Lsh32x32 <types.UInt32> lo s))
+ (Sub32 <typ.UInt32> s (Const32 <typ.UInt32> [32]))))
+ (Lsh32x32 <typ.UInt32> lo s))
(Lsh64x16 (Int64Make hi lo) s) ->
(Int64Make
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Lsh32x16 <types.UInt32> hi s)
- (Rsh32Ux16 <types.UInt32>
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Lsh32x16 <typ.UInt32> hi s)
+ (Rsh32Ux16 <typ.UInt32>
lo
- (Sub16 <types.UInt16> (Const16 <types.UInt16> [32]) s)))
- (Lsh32x16 <types.UInt32>
+ (Sub16 <typ.UInt16> (Const16 <typ.UInt16> [32]) s)))
+ (Lsh32x16 <typ.UInt32>
lo
- (Sub16 <types.UInt16> s (Const16 <types.UInt16> [32]))))
- (Lsh32x16 <types.UInt32> lo s))
+ (Sub16 <typ.UInt16> s (Const16 <typ.UInt16> [32]))))
+ (Lsh32x16 <typ.UInt32> lo s))
(Lsh64x8 (Int64Make hi lo) s) ->
(Int64Make
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Lsh32x8 <types.UInt32> hi s)
- (Rsh32Ux8 <types.UInt32>
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Lsh32x8 <typ.UInt32> hi s)
+ (Rsh32Ux8 <typ.UInt32>
lo
- (Sub8 <types.UInt8> (Const8 <types.UInt8> [32]) s)))
- (Lsh32x8 <types.UInt32>
+ (Sub8 <typ.UInt8> (Const8 <typ.UInt8> [32]) s)))
+ (Lsh32x8 <typ.UInt32>
lo
- (Sub8 <types.UInt8> s (Const8 <types.UInt8> [32]))))
- (Lsh32x8 <types.UInt32> lo s))
+ (Sub8 <typ.UInt8> s (Const8 <typ.UInt8> [32]))))
+ (Lsh32x8 <typ.UInt32> lo s))
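The 64x left-shift rules implement the commented formula directly. Go's own shift semantics (unsigned shift counts of the operand width or more yield 0) match the rule's convention, so a direct transcription works; the wrapped counts 32-s and s-32 become huge uint32 values whose shifts correctly produce 0:

```go
package main

import "fmt"

// shl64 mirrors the Lsh64x32 rule:
//   result.hi = hi<<s | lo>>(32-s) | lo<<(s-32)
//   result.lo = lo<<s
// where any shift by 32 or more yields 0.
func shl64(x uint64, s uint32) uint64 {
	lo, hi := uint32(x), uint32(x>>32)
	rhi := hi<<s | lo>>(32-s) | lo<<(s-32)
	return uint64(rhi)<<32 | uint64(lo<<s)
}

func main() {
	ok := true
	x := uint64(0xdeadbeefcafebabe)
	for s := uint32(0); s < 70; s++ {
		if shl64(x, s) != x<<s {
			ok = false
		}
	}
	fmt.Println(ok)
}
```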
// 64x unsigned right shift
// result.hi = hi>>s
// result.lo = lo>>s | hi<<(32-s) | hi>>(s-32) // >> is unsigned, large shifts result 0
(Rsh64Ux32 (Int64Make hi lo) s) ->
(Int64Make
- (Rsh32Ux32 <types.UInt32> hi s)
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Rsh32Ux32 <types.UInt32> lo s)
- (Lsh32x32 <types.UInt32>
+ (Rsh32Ux32 <typ.UInt32> hi s)
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Rsh32Ux32 <typ.UInt32> lo s)
+ (Lsh32x32 <typ.UInt32>
hi
- (Sub32 <types.UInt32> (Const32 <types.UInt32> [32]) s)))
- (Rsh32Ux32 <types.UInt32>
+ (Sub32 <typ.UInt32> (Const32 <typ.UInt32> [32]) s)))
+ (Rsh32Ux32 <typ.UInt32>
hi
- (Sub32 <types.UInt32> s (Const32 <types.UInt32> [32])))))
+ (Sub32 <typ.UInt32> s (Const32 <typ.UInt32> [32])))))
(Rsh64Ux16 (Int64Make hi lo) s) ->
(Int64Make
- (Rsh32Ux16 <types.UInt32> hi s)
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Rsh32Ux16 <types.UInt32> lo s)
- (Lsh32x16 <types.UInt32>
+ (Rsh32Ux16 <typ.UInt32> hi s)
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Rsh32Ux16 <typ.UInt32> lo s)
+ (Lsh32x16 <typ.UInt32>
hi
- (Sub16 <types.UInt16> (Const16 <types.UInt16> [32]) s)))
- (Rsh32Ux16 <types.UInt32>
+ (Sub16 <typ.UInt16> (Const16 <typ.UInt16> [32]) s)))
+ (Rsh32Ux16 <typ.UInt32>
hi
- (Sub16 <types.UInt16> s (Const16 <types.UInt16> [32])))))
+ (Sub16 <typ.UInt16> s (Const16 <typ.UInt16> [32])))))
(Rsh64Ux8 (Int64Make hi lo) s) ->
(Int64Make
- (Rsh32Ux8 <types.UInt32> hi s)
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Rsh32Ux8 <types.UInt32> lo s)
- (Lsh32x8 <types.UInt32>
+ (Rsh32Ux8 <typ.UInt32> hi s)
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Rsh32Ux8 <typ.UInt32> lo s)
+ (Lsh32x8 <typ.UInt32>
hi
- (Sub8 <types.UInt8> (Const8 <types.UInt8> [32]) s)))
- (Rsh32Ux8 <types.UInt32>
+ (Sub8 <typ.UInt8> (Const8 <typ.UInt8> [32]) s)))
+ (Rsh32Ux8 <typ.UInt32>
hi
- (Sub8 <types.UInt8> s (Const8 <types.UInt8> [32])))))
+ (Sub8 <typ.UInt8> s (Const8 <typ.UInt8> [32])))))
// 64x signed right shift
// result.hi = hi>>s
// result.lo = lo>>s | hi<<(32-s) | (hi>>(s-32))&zeromask(s>>5) // hi>>(s-32) is signed, large shifts result 0/-1
(Rsh64x32 (Int64Make hi lo) s) ->
(Int64Make
- (Rsh32x32 <types.UInt32> hi s)
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Rsh32Ux32 <types.UInt32> lo s)
- (Lsh32x32 <types.UInt32>
+ (Rsh32x32 <typ.UInt32> hi s)
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Rsh32Ux32 <typ.UInt32> lo s)
+ (Lsh32x32 <typ.UInt32>
hi
- (Sub32 <types.UInt32> (Const32 <types.UInt32> [32]) s)))
- (And32 <types.UInt32>
- (Rsh32x32 <types.UInt32>
+ (Sub32 <typ.UInt32> (Const32 <typ.UInt32> [32]) s)))
+ (And32 <typ.UInt32>
+ (Rsh32x32 <typ.UInt32>
hi
- (Sub32 <types.UInt32> s (Const32 <types.UInt32> [32])))
+ (Sub32 <typ.UInt32> s (Const32 <typ.UInt32> [32])))
(Zeromask
- (Rsh32Ux32 <types.UInt32> s (Const32 <types.UInt32> [5]))))))
+ (Rsh32Ux32 <typ.UInt32> s (Const32 <typ.UInt32> [5]))))))
(Rsh64x16 (Int64Make hi lo) s) ->
(Int64Make
- (Rsh32x16 <types.UInt32> hi s)
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Rsh32Ux16 <types.UInt32> lo s)
- (Lsh32x16 <types.UInt32>
+ (Rsh32x16 <typ.UInt32> hi s)
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Rsh32Ux16 <typ.UInt32> lo s)
+ (Lsh32x16 <typ.UInt32>
hi
- (Sub16 <types.UInt16> (Const16 <types.UInt16> [32]) s)))
- (And32 <types.UInt32>
- (Rsh32x16 <types.UInt32>
+ (Sub16 <typ.UInt16> (Const16 <typ.UInt16> [32]) s)))
+ (And32 <typ.UInt32>
+ (Rsh32x16 <typ.UInt32>
hi
- (Sub16 <types.UInt16> s (Const16 <types.UInt16> [32])))
+ (Sub16 <typ.UInt16> s (Const16 <typ.UInt16> [32])))
(Zeromask
(ZeroExt16to32
- (Rsh16Ux32 <types.UInt16> s (Const32 <types.UInt32> [5])))))))
+ (Rsh16Ux32 <typ.UInt16> s (Const32 <typ.UInt32> [5])))))))
(Rsh64x8 (Int64Make hi lo) s) ->
(Int64Make
- (Rsh32x8 <types.UInt32> hi s)
- (Or32 <types.UInt32>
- (Or32 <types.UInt32>
- (Rsh32Ux8 <types.UInt32> lo s)
- (Lsh32x8 <types.UInt32>
+ (Rsh32x8 <typ.UInt32> hi s)
+ (Or32 <typ.UInt32>
+ (Or32 <typ.UInt32>
+ (Rsh32Ux8 <typ.UInt32> lo s)
+ (Lsh32x8 <typ.UInt32>
hi
- (Sub8 <types.UInt8> (Const8 <types.UInt8> [32]) s)))
- (And32 <types.UInt32>
- (Rsh32x8 <types.UInt32>
+ (Sub8 <typ.UInt8> (Const8 <typ.UInt8> [32]) s)))
+ (And32 <typ.UInt32>
+ (Rsh32x8 <typ.UInt32>
hi
- (Sub8 <types.UInt8> s (Const8 <types.UInt8> [32])))
+ (Sub8 <typ.UInt8> s (Const8 <typ.UInt8> [32])))
(Zeromask
(ZeroExt8to32
- (Rsh8Ux32 <types.UInt8> s (Const32 <types.UInt32> [5])))))))
+ (Rsh8Ux32 <typ.UInt8> s (Const32 <typ.UInt32> [5])))))))
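The signed right-shift rules add one subtlety over the unsigned case: the hi>>(s-32) term must be masked off for s < 32, which Zeromask(s>>5) does. A transcription relying on Go's defined behavior for oversized shift counts (0 for unsigned operands, 0 or -1 for signed):

```go
package main

import "fmt"

// zeromask mimics the SSA Zeromask op: all ones if x != 0, else 0.
func zeromask(x uint32) uint32 {
	if x != 0 {
		return 0xFFFFFFFF
	}
	return 0
}

// sar64 mirrors the Rsh64x32 rule:
//   result.hi = hi>>s (arithmetic; 0/-1 for s >= 32)
//   result.lo = lo>>s | hi<<(32-s) | (hi>>(s-32)) & zeromask(s>>5)
// zeromask(s>>5) is zero exactly when s < 32, suppressing the third term.
func sar64(x int64, s uint32) int64 {
	lo, hi := uint32(x), int32(x>>32)
	rhi := hi >> s
	rlo := lo>>s | uint32(hi)<<(32-s) | uint32(hi>>(s-32))&zeromask(s>>5)
	return int64(rhi)<<32 | int64(rlo)
}

func main() {
	ok := true
	for _, x := range []int64{-1, 42, 1 << 62, -0x0123456789abcdef} {
		for s := uint32(0); s < 70; s++ {
			if sar64(x, s) != x>>s {
				ok = false
			}
		}
	}
	fmt.Println(ok)
}
```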
// 64xConst32 shifts
// we probably do not need them -- lateopt may take care of them just fine
//
//(Lsh64x32 x (Const32 [c])) && c < 64 && c > 32 ->
// (Int64Make
-// (Lsh32x32 <types.UInt32> (Int64Lo x) (Const32 <types.UInt32> [c-32]))
-// (Const32 <types.UInt32> [0]))
+// (Lsh32x32 <typ.UInt32> (Int64Lo x) (Const32 <typ.UInt32> [c-32]))
+// (Const32 <typ.UInt32> [0]))
//(Rsh64x32 x (Const32 [c])) && c < 64 && c > 32 ->
// (Int64Make
// (Signmask (Int64Hi x))
-// (Rsh32x32 <types.Int32> (Int64Hi x) (Const32 <types.UInt32> [c-32])))
+// (Rsh32x32 <typ.Int32> (Int64Hi x) (Const32 <typ.UInt32> [c-32])))
//(Rsh64Ux32 x (Const32 [c])) && c < 64 && c > 32 ->
// (Int64Make
-// (Const32 <types.UInt32> [0])
-// (Rsh32Ux32 <types.UInt32> (Int64Hi x) (Const32 <types.UInt32> [c-32])))
+// (Const32 <typ.UInt32> [0])
+// (Rsh32Ux32 <typ.UInt32> (Int64Hi x) (Const32 <typ.UInt32> [c-32])))
//
-//(Lsh64x32 x (Const32 [32])) -> (Int64Make (Int64Lo x) (Const32 <types.UInt32> [0]))
+//(Lsh64x32 x (Const32 [32])) -> (Int64Make (Int64Lo x) (Const32 <typ.UInt32> [0]))
//(Rsh64x32 x (Const32 [32])) -> (Int64Make (Signmask (Int64Hi x)) (Int64Hi x))
-//(Rsh64Ux32 x (Const32 [32])) -> (Int64Make (Const32 <types.UInt32> [0]) (Int64Hi x))
+//(Rsh64Ux32 x (Const32 [32])) -> (Int64Make (Const32 <typ.UInt32> [0]) (Int64Hi x))
//
//(Lsh64x32 x (Const32 [c])) && c < 32 && c > 0 ->
// (Int64Make
-// (Or32 <types.UInt32>
-// (Lsh32x32 <types.UInt32> (Int64Hi x) (Const32 <types.UInt32> [c]))
-// (Rsh32Ux32 <types.UInt32> (Int64Lo x) (Const32 <types.UInt32> [32-c])))
-// (Lsh32x32 <types.UInt32> (Int64Lo x) (Const32 <types.UInt32> [c])))
+// (Or32 <typ.UInt32>
+// (Lsh32x32 <typ.UInt32> (Int64Hi x) (Const32 <typ.UInt32> [c]))
+// (Rsh32Ux32 <typ.UInt32> (Int64Lo x) (Const32 <typ.UInt32> [32-c])))
+// (Lsh32x32 <typ.UInt32> (Int64Lo x) (Const32 <typ.UInt32> [c])))
//(Rsh64x32 x (Const32 [c])) && c < 32 && c > 0 ->
// (Int64Make
-// (Rsh32x32 <types.Int32> (Int64Hi x) (Const32 <types.UInt32> [c]))
-// (Or32 <types.UInt32>
-// (Rsh32Ux32 <types.UInt32> (Int64Lo x) (Const32 <types.UInt32> [c]))
-// (Lsh32x32 <types.UInt32> (Int64Hi x) (Const32 <types.UInt32> [32-c]))))
+// (Rsh32x32 <typ.Int32> (Int64Hi x) (Const32 <typ.UInt32> [c]))
+// (Or32 <typ.UInt32>
+// (Rsh32Ux32 <typ.UInt32> (Int64Lo x) (Const32 <typ.UInt32> [c]))
+// (Lsh32x32 <typ.UInt32> (Int64Hi x) (Const32 <typ.UInt32> [32-c]))))
//(Rsh64Ux32 x (Const32 [c])) && c < 32 && c > 0 ->
// (Int64Make
-// (Rsh32Ux32 <types.UInt32> (Int64Hi x) (Const32 <types.UInt32> [c]))
-// (Or32 <types.UInt32>
-// (Rsh32Ux32 <types.UInt32> (Int64Lo x) (Const32 <types.UInt32> [c]))
-// (Lsh32x32 <types.UInt32> (Int64Hi x) (Const32 <types.UInt32> [32-c]))))
+// (Rsh32Ux32 <typ.UInt32> (Int64Hi x) (Const32 <typ.UInt32> [c]))
+// (Or32 <typ.UInt32>
+// (Rsh32Ux32 <typ.UInt32> (Int64Lo x) (Const32 <typ.UInt32> [c]))
+// (Lsh32x32 <typ.UInt32> (Int64Hi x) (Const32 <typ.UInt32> [32-c]))))
//
//(Lsh64x32 x (Const32 [0])) -> x
//(Rsh64x32 x (Const32 [0])) -> x
//(Rsh64Ux32 x (Const32 [0])) -> x
(Const64 <t> [c]) && t.IsSigned() ->
- (Int64Make (Const32 <types.Int32> [c>>32]) (Const32 <types.UInt32> [int64(int32(c))]))
+ (Int64Make (Const32 <typ.Int32> [c>>32]) (Const32 <typ.UInt32> [int64(int32(c))]))
(Const64 <t> [c]) && !t.IsSigned() ->
- (Int64Make (Const32 <types.UInt32> [c>>32]) (Const32 <types.UInt32> [int64(int32(c))]))
+ (Int64Make (Const32 <typ.UInt32> [c>>32]) (Const32 <typ.UInt32> [int64(int32(c))]))
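The Const64 rules split a 64-bit constant into a high word [c>>32] and a low word [int64(int32(c))]. A quick round-trip check of that split:

```go
package main

import "fmt"

// splitReassemble splits c the way the Const64 rules do and reports
// whether reassembling the two 32-bit words recovers the original.
func splitReassemble(c int64) bool {
	hi := int32(c >> 32) // (Const32 [c>>32]): arithmetic shift keeps the sign
	lo := int32(c)       // (Const32 [int64(int32(c))]): low 32 bits
	return int64(hi)<<32|int64(uint32(lo)) == c
}

func main() {
	ok := true
	for _, c := range []int64{0, -1, 0x0123456789abcdef, -0x0123456789abcdef} {
		if !splitReassemble(c) {
			ok = false
		}
	}
	fmt.Println(ok)
}
```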
(Eq64 x y) ->
(AndB
(Mul64 (Const64 [-1]) x) -> (Neg64 x)
// Convert multiplication by a power of two to a shift.
-(Mul8 <t> n (Const8 [c])) && isPowerOfTwo(c) -> (Lsh8x64 <t> n (Const64 <types.UInt64> [log2(c)]))
-(Mul16 <t> n (Const16 [c])) && isPowerOfTwo(c) -> (Lsh16x64 <t> n (Const64 <types.UInt64> [log2(c)]))
-(Mul32 <t> n (Const32 [c])) && isPowerOfTwo(c) -> (Lsh32x64 <t> n (Const64 <types.UInt64> [log2(c)]))
-(Mul64 <t> n (Const64 [c])) && isPowerOfTwo(c) -> (Lsh64x64 <t> n (Const64 <types.UInt64> [log2(c)]))
-(Mul8 <t> n (Const8 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg8 (Lsh8x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
-(Mul16 <t> n (Const16 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg16 (Lsh16x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
-(Mul32 <t> n (Const32 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg32 (Lsh32x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
-(Mul64 <t> n (Const64 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg64 (Lsh64x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+(Mul8 <t> n (Const8 [c])) && isPowerOfTwo(c) -> (Lsh8x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
+(Mul16 <t> n (Const16 [c])) && isPowerOfTwo(c) -> (Lsh16x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
+(Mul32 <t> n (Const32 [c])) && isPowerOfTwo(c) -> (Lsh32x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
+(Mul64 <t> n (Const64 [c])) && isPowerOfTwo(c) -> (Lsh64x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
+(Mul8 <t> n (Const8 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg8 (Lsh8x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
+(Mul16 <t> n (Const16 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg16 (Lsh16x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
+(Mul32 <t> n (Const32 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg32 (Lsh32x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
+(Mul64 <t> n (Const64 [c])) && t.IsSigned() && isPowerOfTwo(-c) -> (Neg64 (Lsh64x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
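These rules strength-reduce multiplication by ±2^k into a (possibly negated) shift by log2 of the constant. A sketch with bits.TrailingZeros64 standing in for the rules' log2 helper:

```go
package main

import (
	"fmt"
	"math/bits"
)

// pow2MulOK checks the two identities the rules rely on for a power-of-two
// constant c: n*c == n<<log2(c) and n*(-c) == -(n<<log2(c)).
func pow2MulOK(n, c int64) bool {
	s := uint(bits.TrailingZeros64(uint64(c))) // log2(c) for a power of two
	return n<<s == n*c && -(n<<s) == n*(-c)
}

func main() {
	fmt.Println(pow2MulOK(37, 8))   // 37*8 == 37<<3
	fmt.Println(pow2MulOK(-5, 16))  // works for negative n too
}
```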
(Mod8 (Const8 [c]) (Const8 [d])) && d != 0 -> (Const8 [int64(int8(c % d))])
(Mod16 (Const16 [c]) (Const16 [d])) && d != 0 -> (Const16 [int64(int16(c % d))])
// ((x >> c1) << c2) >> c3
(Rsh64Ux64 (Lsh64x64 (Rsh64Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Rsh64Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Rsh64Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
(Rsh32Ux64 (Lsh32x64 (Rsh32Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Rsh32Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Rsh32Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
(Rsh16Ux64 (Lsh16x64 (Rsh16Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Rsh16Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Rsh16Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
(Rsh8Ux64 (Lsh8x64 (Rsh8Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Rsh8Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Rsh8Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
// ((x << c1) >> c2) << c3
(Lsh64x64 (Rsh64Ux64 (Lsh64x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Lsh64x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Lsh64x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
(Lsh32x64 (Rsh32Ux64 (Lsh32x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Lsh32x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Lsh32x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
(Lsh16x64 (Rsh16Ux64 (Lsh16x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Lsh16x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Lsh16x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
(Lsh8x64 (Rsh8Ux64 (Lsh8x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
&& uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- -> (Lsh8x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ -> (Lsh8x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
// replace shifts with zero extensions
-(Rsh16Ux64 (Lsh16x64 x (Const64 [8])) (Const64 [8])) -> (ZeroExt8to16 (Trunc16to8 <types.UInt8> x))
-(Rsh32Ux64 (Lsh32x64 x (Const64 [24])) (Const64 [24])) -> (ZeroExt8to32 (Trunc32to8 <types.UInt8> x))
-(Rsh64Ux64 (Lsh64x64 x (Const64 [56])) (Const64 [56])) -> (ZeroExt8to64 (Trunc64to8 <types.UInt8> x))
-(Rsh32Ux64 (Lsh32x64 x (Const64 [16])) (Const64 [16])) -> (ZeroExt16to32 (Trunc32to16 <types.UInt16> x))
-(Rsh64Ux64 (Lsh64x64 x (Const64 [48])) (Const64 [48])) -> (ZeroExt16to64 (Trunc64to16 <types.UInt16> x))
-(Rsh64Ux64 (Lsh64x64 x (Const64 [32])) (Const64 [32])) -> (ZeroExt32to64 (Trunc64to32 <types.UInt32> x))
+(Rsh16Ux64 (Lsh16x64 x (Const64 [8])) (Const64 [8])) -> (ZeroExt8to16 (Trunc16to8 <typ.UInt8> x))
+(Rsh32Ux64 (Lsh32x64 x (Const64 [24])) (Const64 [24])) -> (ZeroExt8to32 (Trunc32to8 <typ.UInt8> x))
+(Rsh64Ux64 (Lsh64x64 x (Const64 [56])) (Const64 [56])) -> (ZeroExt8to64 (Trunc64to8 <typ.UInt8> x))
+(Rsh32Ux64 (Lsh32x64 x (Const64 [16])) (Const64 [16])) -> (ZeroExt16to32 (Trunc32to16 <typ.UInt16> x))
+(Rsh64Ux64 (Lsh64x64 x (Const64 [48])) (Const64 [48])) -> (ZeroExt16to64 (Trunc64to16 <typ.UInt16> x))
+(Rsh64Ux64 (Lsh64x64 x (Const64 [32])) (Const64 [32])) -> (ZeroExt32to64 (Trunc64to32 <typ.UInt32> x))
// replace shifts with sign extensions
-(Rsh16x64 (Lsh16x64 x (Const64 [8])) (Const64 [8])) -> (SignExt8to16 (Trunc16to8 <types.Int8> x))
-(Rsh32x64 (Lsh32x64 x (Const64 [24])) (Const64 [24])) -> (SignExt8to32 (Trunc32to8 <types.Int8> x))
-(Rsh64x64 (Lsh64x64 x (Const64 [56])) (Const64 [56])) -> (SignExt8to64 (Trunc64to8 <types.Int8> x))
-(Rsh32x64 (Lsh32x64 x (Const64 [16])) (Const64 [16])) -> (SignExt16to32 (Trunc32to16 <types.Int16> x))
-(Rsh64x64 (Lsh64x64 x (Const64 [48])) (Const64 [48])) -> (SignExt16to64 (Trunc64to16 <types.Int16> x))
-(Rsh64x64 (Lsh64x64 x (Const64 [32])) (Const64 [32])) -> (SignExt32to64 (Trunc64to32 <types.Int32> x))
+(Rsh16x64 (Lsh16x64 x (Const64 [8])) (Const64 [8])) -> (SignExt8to16 (Trunc16to8 <typ.Int8> x))
+(Rsh32x64 (Lsh32x64 x (Const64 [24])) (Const64 [24])) -> (SignExt8to32 (Trunc32to8 <typ.Int8> x))
+(Rsh64x64 (Lsh64x64 x (Const64 [56])) (Const64 [56])) -> (SignExt8to64 (Trunc64to8 <typ.Int8> x))
+(Rsh32x64 (Lsh32x64 x (Const64 [16])) (Const64 [16])) -> (SignExt16to32 (Trunc32to16 <typ.Int16> x))
+(Rsh64x64 (Lsh64x64 x (Const64 [48])) (Const64 [48])) -> (SignExt16to64 (Trunc64to16 <typ.Int16> x))
+(Rsh64x64 (Lsh64x64 x (Const64 [32])) (Const64 [32])) -> (SignExt32to64 (Trunc64to32 <typ.Int32> x))
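The two rule groups above recognize a left/right shift pair as an extension of the surviving low bits: shifting back unsigned is a zero extension, shifting back signed is a sign extension. A sketch of both identities for the 8-in-32 case:

```go
package main

import "fmt"

// extOK checks the 24-bit shift-pair identities:
//   signed  (x<<24)>>24 == SignExt8to32(Trunc32to8 x)
//   unsigned (u<<24)>>24 == ZeroExt8to32(Trunc32to8 u)
func extOK(x int32) bool {
	u := uint32(x)
	return (x<<24)>>24 == int32(int8(x)) && (u<<24)>>24 == uint32(uint8(u))
}

func main() {
	fmt.Println(extOK(0x1234ABCD)) // low byte 0xCD sign-extends to -51
	fmt.Println(extOK(-1))
}
```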
// constant comparisons
(Eq64 (Const64 [c]) (Const64 [d])) -> (ConstBool [b2i(c == d)])
(NeqSlice x y) -> (NeqPtr (SlicePtr x) (SlicePtr y))
// Load of store of same address, with compatibly typed value and same size
-(Load <t1> p1 (Store {t2} p2 x _)) && isSamePtr(p1,p2) && t1.Compare(x.Type)==CMPeq && t1.Size() == t2.(Type).Size() -> x
+(Load <t1> p1 (Store {t2} p2 x _)) && isSamePtr(p1,p2) && t1.Compare(x.Type) == types.CMPeq && t1.Size() == t2.(*types.Type).Size() -> x
// Collapse OffPtr
(OffPtr (OffPtr p [b]) [a]) -> (OffPtr p [a+b])
-(OffPtr p [0]) && v.Type.Compare(p.Type) == CMPeq -> p
+(OffPtr p [0]) && v.Type.Compare(p.Type) == types.CMPeq -> p
// indexing operations
// Note: bounds check has already been done
-(PtrIndex <t> ptr idx) && config.PtrSize == 4 -> (AddPtr ptr (Mul32 <types.Int> idx (Const32 <types.Int> [t.ElemType().Size()])))
-(PtrIndex <t> ptr idx) && config.PtrSize == 8 -> (AddPtr ptr (Mul64 <types.Int> idx (Const64 <types.Int> [t.ElemType().Size()])))
+(PtrIndex <t> ptr idx) && config.PtrSize == 4 -> (AddPtr ptr (Mul32 <typ.Int> idx (Const32 <typ.Int> [t.ElemType().Size()])))
+(PtrIndex <t> ptr idx) && config.PtrSize == 8 -> (AddPtr ptr (Mul64 <typ.Int> idx (Const64 <typ.Int> [t.ElemType().Size()])))
// struct operations
(StructSelect (StructMake1 x)) -> x
(StructSelect [0] x:(IData _)) -> x
// un-SSAable values use mem->mem copies
-(Store {t} dst (Load src mem) mem) && !fe.CanSSA(t.(Type)) ->
- (Move {t} [t.(Type).Size()] dst src mem)
-(Store {t} dst (Load src mem) (VarDef {x} mem)) && !fe.CanSSA(t.(Type)) ->
- (Move {t} [t.(Type).Size()] dst src (VarDef {x} mem))
+(Store {t} dst (Load src mem) mem) && !fe.CanSSA(t.(*types.Type)) ->
+ (Move {t} [t.(*types.Type).Size()] dst src mem)
+(Store {t} dst (Load src mem) (VarDef {x} mem)) && !fe.CanSSA(t.(*types.Type)) ->
+ (Move {t} [t.(*types.Type).Size()] dst src (VarDef {x} mem))
// array ops
(ArraySelect (ArrayMake1 x)) -> x
(StringPtr (StringMake (Const64 <t> [c]) _)) -> (Const64 <t> [c])
(StringLen (StringMake _ (Const64 <t> [c]))) -> (Const64 <t> [c])
(ConstString {s}) && config.PtrSize == 4 && s.(string) == "" ->
- (StringMake (ConstNil) (Const32 <types.Int> [0]))
+ (StringMake (ConstNil) (Const32 <typ.Int> [0]))
(ConstString {s}) && config.PtrSize == 8 && s.(string) == "" ->
- (StringMake (ConstNil) (Const64 <types.Int> [0]))
+ (StringMake (ConstNil) (Const64 <typ.Int> [0]))
(ConstString {s}) && config.PtrSize == 4 && s.(string) != "" ->
(StringMake
- (Addr <types.BytePtr> {fe.StringData(s.(string))}
+ (Addr <typ.BytePtr> {fe.StringData(s.(string))}
(SB))
- (Const32 <types.Int> [int64(len(s.(string)))]))
+ (Const32 <typ.Int> [int64(len(s.(string)))]))
(ConstString {s}) && config.PtrSize == 8 && s.(string) != "" ->
(StringMake
- (Addr <types.BytePtr> {fe.StringData(s.(string))}
+ (Addr <typ.BytePtr> {fe.StringData(s.(string))}
(SB))
- (Const64 <types.Int> [int64(len(s.(string)))]))
+ (Const64 <typ.Int> [int64(len(s.(string)))]))
// slice ops
// Only a few slice rules are provided here. See dec.rules for
(ConstSlice) && config.PtrSize == 4 ->
(SliceMake
(ConstNil <v.Type.ElemType().PtrTo()>)
- (Const32 <types.Int> [0])
- (Const32 <types.Int> [0]))
+ (Const32 <typ.Int> [0])
+ (Const32 <typ.Int> [0]))
(ConstSlice) && config.PtrSize == 8 ->
(SliceMake
(ConstNil <v.Type.ElemType().PtrTo()>)
- (Const64 <types.Int> [0])
- (Const64 <types.Int> [0]))
+ (Const64 <typ.Int> [0])
+ (Const64 <typ.Int> [0]))
// interface ops
(ConstInterface) ->
(IMake
- (ConstNil <types.BytePtr>)
- (ConstNil <types.BytePtr>))
+ (ConstNil <typ.BytePtr>)
+ (ConstNil <typ.BytePtr>))
(NilCheck (GetG mem) mem) -> mem
// Decompose compound argument values
(Arg {n} [off]) && v.Type.IsString() ->
(StringMake
- (Arg <types.BytePtr> {n} [off])
- (Arg <types.Int> {n} [off+config.PtrSize]))
+ (Arg <typ.BytePtr> {n} [off])
+ (Arg <typ.Int> {n} [off+config.PtrSize]))
(Arg {n} [off]) && v.Type.IsSlice() ->
(SliceMake
(Arg <v.Type.ElemType().PtrTo()> {n} [off])
- (Arg <types.Int> {n} [off+config.PtrSize])
- (Arg <types.Int> {n} [off+2*config.PtrSize]))
+ (Arg <typ.Int> {n} [off+config.PtrSize])
+ (Arg <typ.Int> {n} [off+2*config.PtrSize]))
(Arg {n} [off]) && v.Type.IsInterface() ->
(IMake
- (Arg <types.BytePtr> {n} [off])
- (Arg <types.BytePtr> {n} [off+config.PtrSize]))
+ (Arg <typ.BytePtr> {n} [off])
+ (Arg <typ.BytePtr> {n} [off+config.PtrSize]))
(Arg {n} [off]) && v.Type.IsComplex() && v.Type.Size() == 16 ->
(ComplexMake
- (Arg <types.Float64> {n} [off])
- (Arg <types.Float64> {n} [off+8]))
+ (Arg <typ.Float64> {n} [off])
+ (Arg <typ.Float64> {n} [off+8]))
(Arg {n} [off]) && v.Type.IsComplex() && v.Type.Size() == 8 ->
(ComplexMake
- (Arg <types.Float32> {n} [off])
- (Arg <types.Float32> {n} [off+4]))
+ (Arg <typ.Float32> {n} [off])
+ (Arg <typ.Float32> {n} [off+4]))
(Arg <t>) && t.IsStruct() && t.NumFields() == 0 && fe.CanSSA(t) ->
(StructMake0)
// See ../magic.go for a detailed description of these algorithms.
// Unsigned divide by power of 2. Strength reduce to a shift.
-(Div8u n (Const8 [c])) && isPowerOfTwo(c&0xff) -> (Rsh8Ux64 n (Const64 <types.UInt64> [log2(c&0xff)]))
-(Div16u n (Const16 [c])) && isPowerOfTwo(c&0xffff) -> (Rsh16Ux64 n (Const64 <types.UInt64> [log2(c&0xffff)]))
-(Div32u n (Const32 [c])) && isPowerOfTwo(c&0xffffffff) -> (Rsh32Ux64 n (Const64 <types.UInt64> [log2(c&0xffffffff)]))
-(Div64u n (Const64 [c])) && isPowerOfTwo(c) -> (Rsh64Ux64 n (Const64 <types.UInt64> [log2(c)]))
+(Div8u n (Const8 [c])) && isPowerOfTwo(c&0xff) -> (Rsh8Ux64 n (Const64 <typ.UInt64> [log2(c&0xff)]))
+(Div16u n (Const16 [c])) && isPowerOfTwo(c&0xffff) -> (Rsh16Ux64 n (Const64 <typ.UInt64> [log2(c&0xffff)]))
+(Div32u n (Const32 [c])) && isPowerOfTwo(c&0xffffffff) -> (Rsh32Ux64 n (Const64 <typ.UInt64> [log2(c&0xffffffff)]))
+(Div64u n (Const64 [c])) && isPowerOfTwo(c) -> (Rsh64Ux64 n (Const64 <typ.UInt64> [log2(c)]))
// Unsigned divide, not a power of 2. Strength reduce to a multiply.
// For 8-bit divides, we just do a direct 9-bit by 8-bit multiply.
(Div8u x (Const8 [c])) && umagicOK(8, c) ->
(Trunc32to8
- (Rsh32Ux64 <types.UInt32>
- (Mul32 <types.UInt32>
- (Const32 <types.UInt32> [int64(1<<8+umagic(8,c).m)])
+ (Rsh32Ux64 <typ.UInt32>
+ (Mul32 <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(1<<8+umagic(8,c).m)])
(ZeroExt8to32 x))
- (Const64 <types.UInt64> [8+umagic(8,c).s])))
+ (Const64 <typ.UInt64> [8+umagic(8,c).s])))
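The Div8u rule widens x to 32 bits and replaces the divide with a 9-bit-by-8-bit multiply and shift. For c == 3 the multiplier 1<<8 + umagic(8,3).m and the shift 8 + umagic(8,3).s work out to 342 and 10; those concrete values are derived by hand here and brute-force checked, not read out of magic.go:

```go
package main

import "fmt"

// div3u8 divides an 8-bit value by 3 the way the Div8u rule would:
// a 9-bit multiplier (342 = 256 + 86) followed by a shift of 10.
func div3u8(x uint32) uint32 {
	return (x * 342) >> 10
}

func main() {
	ok := true
	for x := uint32(0); x < 256; x++ { // exhaustive over all 8-bit inputs
		if div3u8(x) != x/3 {
			ok = false
		}
	}
	fmt.Println(ok)
}
```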
// For 16-bit divides on 64-bit machines, we do a direct 17-bit by 16-bit multiply.
(Div16u x (Const16 [c])) && umagicOK(16, c) && config.RegSize == 8 ->
(Trunc64to16
- (Rsh64Ux64 <types.UInt64>
- (Mul64 <types.UInt64>
- (Const64 <types.UInt64> [int64(1<<16+umagic(16,c).m)])
+ (Rsh64Ux64 <typ.UInt64>
+ (Mul64 <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(1<<16+umagic(16,c).m)])
(ZeroExt16to64 x))
- (Const64 <types.UInt64> [16+umagic(16,c).s])))
+ (Const64 <typ.UInt64> [16+umagic(16,c).s])))
// For 16-bit divides on 32-bit machines
(Div16u x (Const16 [c])) && umagicOK(16, c) && config.RegSize == 4 && umagic(16,c).m&1 == 0 ->
(Trunc32to16
- (Rsh32Ux64 <types.UInt32>
- (Mul32 <types.UInt32>
- (Const32 <types.UInt32> [int64(1<<15+umagic(16,c).m/2)])
+ (Rsh32Ux64 <typ.UInt32>
+ (Mul32 <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(1<<15+umagic(16,c).m/2)])
(ZeroExt16to32 x))
- (Const64 <types.UInt64> [16+umagic(16,c).s-1])))
+ (Const64 <typ.UInt64> [16+umagic(16,c).s-1])))
(Div16u x (Const16 [c])) && umagicOK(16, c) && config.RegSize == 4 && c&1 == 0 ->
(Trunc32to16
- (Rsh32Ux64 <types.UInt32>
- (Mul32 <types.UInt32>
- (Const32 <types.UInt32> [int64(1<<15+(umagic(16,c).m+1)/2)])
- (Rsh32Ux64 <types.UInt32> (ZeroExt16to32 x) (Const64 <types.UInt64> [1])))
- (Const64 <types.UInt64> [16+umagic(16,c).s-2])))
+ (Rsh32Ux64 <typ.UInt32>
+ (Mul32 <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(1<<15+(umagic(16,c).m+1)/2)])
+ (Rsh32Ux64 <typ.UInt32> (ZeroExt16to32 x) (Const64 <typ.UInt64> [1])))
+ (Const64 <typ.UInt64> [16+umagic(16,c).s-2])))
(Div16u x (Const16 [c])) && umagicOK(16, c) && config.RegSize == 4 ->
(Trunc32to16
- (Rsh32Ux64 <types.UInt32>
+ (Rsh32Ux64 <typ.UInt32>
(Avg32u
- (Lsh32x64 <types.UInt32> (ZeroExt16to32 x) (Const64 <types.UInt64> [16]))
- (Mul32 <types.UInt32>
- (Const32 <types.UInt32> [int64(umagic(16,c).m)])
+ (Lsh32x64 <typ.UInt32> (ZeroExt16to32 x) (Const64 <typ.UInt64> [16]))
+ (Mul32 <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(umagic(16,c).m)])
(ZeroExt16to32 x)))
- (Const64 <types.UInt64> [16+umagic(16,c).s-1])))
+ (Const64 <typ.UInt64> [16+umagic(16,c).s-1])))
// For 32-bit divides on 32-bit machines
(Div32u x (Const32 [c])) && umagicOK(32, c) && config.RegSize == 4 && umagic(32,c).m&1 == 0 ->
- (Rsh32Ux64 <types.UInt32>
- (Hmul32u <types.UInt32>
- (Const32 <types.UInt32> [int64(int32(1<<31+umagic(32,c).m/2))])
+ (Rsh32Ux64 <typ.UInt32>
+ (Hmul32u <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(int32(1<<31+umagic(32,c).m/2))])
x)
- (Const64 <types.UInt64> [umagic(32,c).s-1]))
+ (Const64 <typ.UInt64> [umagic(32,c).s-1]))
(Div32u x (Const32 [c])) && umagicOK(32, c) && config.RegSize == 4 && c&1 == 0 ->
- (Rsh32Ux64 <types.UInt32>
- (Hmul32u <types.UInt32>
- (Const32 <types.UInt32> [int64(int32(1<<31+(umagic(32,c).m+1)/2))])
- (Rsh32Ux64 <types.UInt32> x (Const64 <types.UInt64> [1])))
- (Const64 <types.UInt64> [umagic(32,c).s-2]))
+ (Rsh32Ux64 <typ.UInt32>
+ (Hmul32u <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(int32(1<<31+(umagic(32,c).m+1)/2))])
+ (Rsh32Ux64 <typ.UInt32> x (Const64 <typ.UInt64> [1])))
+ (Const64 <typ.UInt64> [umagic(32,c).s-2]))
(Div32u x (Const32 [c])) && umagicOK(32, c) && config.RegSize == 4 ->
- (Rsh32Ux64 <types.UInt32>
+ (Rsh32Ux64 <typ.UInt32>
(Avg32u
x
- (Hmul32u <types.UInt32>
- (Const32 <types.UInt32> [int64(int32(umagic(32,c).m))])
+ (Hmul32u <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(int32(umagic(32,c).m))])
x))
- (Const64 <types.UInt64> [umagic(32,c).s-1]))
+ (Const64 <typ.UInt64> [umagic(32,c).s-1]))
// For 32-bit divides on 64-bit machines
// We'll use a regular (non-hi) multiply for this case.
(Div32u x (Const32 [c])) && umagicOK(32, c) && config.RegSize == 8 && umagic(32,c).m&1 == 0 ->
(Trunc64to32
- (Rsh64Ux64 <types.UInt64>
- (Mul64 <types.UInt64>
- (Const64 <types.UInt64> [int64(1<<31+umagic(32,c).m/2)])
+ (Rsh64Ux64 <typ.UInt64>
+ (Mul64 <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(1<<31+umagic(32,c).m/2)])
(ZeroExt32to64 x))
- (Const64 <types.UInt64> [32+umagic(32,c).s-1])))
+ (Const64 <typ.UInt64> [32+umagic(32,c).s-1])))
(Div32u x (Const32 [c])) && umagicOK(32, c) && config.RegSize == 8 && c&1 == 0 ->
(Trunc64to32
- (Rsh64Ux64 <types.UInt64>
- (Mul64 <types.UInt64>
- (Const64 <types.UInt64> [int64(1<<31+(umagic(32,c).m+1)/2)])
- (Rsh64Ux64 <types.UInt64> (ZeroExt32to64 x) (Const64 <types.UInt64> [1])))
- (Const64 <types.UInt64> [32+umagic(32,c).s-2])))
+ (Rsh64Ux64 <typ.UInt64>
+ (Mul64 <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(1<<31+(umagic(32,c).m+1)/2)])
+ (Rsh64Ux64 <typ.UInt64> (ZeroExt32to64 x) (Const64 <typ.UInt64> [1])))
+ (Const64 <typ.UInt64> [32+umagic(32,c).s-2])))
(Div32u x (Const32 [c])) && umagicOK(32, c) && config.RegSize == 8 ->
(Trunc64to32
- (Rsh64Ux64 <types.UInt64>
+ (Rsh64Ux64 <typ.UInt64>
(Avg64u
- (Lsh64x64 <types.UInt64> (ZeroExt32to64 x) (Const64 <types.UInt64> [32]))
- (Mul64 <types.UInt64>
- (Const64 <types.UInt32> [int64(umagic(32,c).m)])
+ (Lsh64x64 <typ.UInt64> (ZeroExt32to64 x) (Const64 <typ.UInt64> [32]))
+ (Mul64 <typ.UInt64>
+ (Const64 <typ.UInt32> [int64(umagic(32,c).m)])
(ZeroExt32to64 x)))
- (Const64 <types.UInt64> [32+umagic(32,c).s-1])))
+ (Const64 <typ.UInt64> [32+umagic(32,c).s-1])))
// For 64-bit divides on 64-bit machines
// (64-bit divides on 32-bit machines are lowered to a runtime call by the walk pass.)
(Div64u x (Const64 [c])) && umagicOK(64, c) && config.RegSize == 8 && umagic(64,c).m&1 == 0 ->
- (Rsh64Ux64 <types.UInt64>
- (Hmul64u <types.UInt64>
- (Const64 <types.UInt64> [int64(1<<63+umagic(64,c).m/2)])
+ (Rsh64Ux64 <typ.UInt64>
+ (Hmul64u <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(1<<63+umagic(64,c).m/2)])
x)
- (Const64 <types.UInt64> [umagic(64,c).s-1]))
+ (Const64 <typ.UInt64> [umagic(64,c).s-1]))
(Div64u x (Const64 [c])) && umagicOK(64, c) && config.RegSize == 8 && c&1 == 0 ->
- (Rsh64Ux64 <types.UInt64>
- (Hmul64u <types.UInt64>
- (Const64 <types.UInt64> [int64(1<<63+(umagic(64,c).m+1)/2)])
- (Rsh64Ux64 <types.UInt64> x (Const64 <types.UInt64> [1])))
- (Const64 <types.UInt64> [umagic(64,c).s-2]))
+ (Rsh64Ux64 <typ.UInt64>
+ (Hmul64u <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(1<<63+(umagic(64,c).m+1)/2)])
+ (Rsh64Ux64 <typ.UInt64> x (Const64 <typ.UInt64> [1])))
+ (Const64 <typ.UInt64> [umagic(64,c).s-2]))
(Div64u x (Const64 [c])) && umagicOK(64, c) && config.RegSize == 8 ->
- (Rsh64Ux64 <types.UInt64>
+ (Rsh64Ux64 <typ.UInt64>
(Avg64u
x
- (Hmul64u <types.UInt64>
- (Const64 <types.UInt64> [int64(umagic(64,c).m)])
+ (Hmul64u <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(umagic(64,c).m)])
x))
- (Const64 <types.UInt64> [umagic(64,c).s-1]))
+ (Const64 <typ.UInt64> [umagic(64,c).s-1]))
// Signed divide by a negative constant. Rewrite to divide by a positive constant.
(Div8 <t> n (Const8 [c])) && c < 0 && c != -1<<7 -> (Neg8 (Div8 <t> n (Const8 <t> [-c])))
// Dividing by the most-negative number. Result is always 0 except
// if the input is also the most-negative number.
// We can detect that using the sign bit of x & -x.
-(Div8 <t> x (Const8 [-1<<7 ])) -> (Rsh8Ux64 (And8 <t> x (Neg8 <t> x)) (Const64 <types.UInt64> [7 ]))
-(Div16 <t> x (Const16 [-1<<15])) -> (Rsh16Ux64 (And16 <t> x (Neg16 <t> x)) (Const64 <types.UInt64> [15]))
-(Div32 <t> x (Const32 [-1<<31])) -> (Rsh32Ux64 (And32 <t> x (Neg32 <t> x)) (Const64 <types.UInt64> [31]))
-(Div64 <t> x (Const64 [-1<<63])) -> (Rsh64Ux64 (And64 <t> x (Neg64 <t> x)) (Const64 <types.UInt64> [63]))
+(Div8 <t> x (Const8 [-1<<7 ])) -> (Rsh8Ux64 (And8 <t> x (Neg8 <t> x)) (Const64 <typ.UInt64> [7 ]))
+(Div16 <t> x (Const16 [-1<<15])) -> (Rsh16Ux64 (And16 <t> x (Neg16 <t> x)) (Const64 <typ.UInt64> [15]))
+(Div32 <t> x (Const32 [-1<<31])) -> (Rsh32Ux64 (And32 <t> x (Neg32 <t> x)) (Const64 <typ.UInt64> [31]))
+(Div64 <t> x (Const64 [-1<<63])) -> (Rsh64Ux64 (And64 <t> x (Neg64 <t> x)) (Const64 <typ.UInt64> [63]))
// Signed divide by power of 2.
// n / c = n >> log(c) if n >= 0
// We conditionally add c-1 by adding n>>63>>(64-log(c)) (first shift signed, second shift unsigned).
(Div8 <t> n (Const8 [c])) && isPowerOfTwo(c) ->
(Rsh8x64
- (Add8 <t> n (Rsh8Ux64 <t> (Rsh8x64 <t> n (Const64 <types.UInt64> [ 7])) (Const64 <types.UInt64> [ 8-log2(c)])))
- (Const64 <types.UInt64> [log2(c)]))
+ (Add8 <t> n (Rsh8Ux64 <t> (Rsh8x64 <t> n (Const64 <typ.UInt64> [ 7])) (Const64 <typ.UInt64> [ 8-log2(c)])))
+ (Const64 <typ.UInt64> [log2(c)]))
(Div16 <t> n (Const16 [c])) && isPowerOfTwo(c) ->
(Rsh16x64
- (Add16 <t> n (Rsh16Ux64 <t> (Rsh16x64 <t> n (Const64 <types.UInt64> [15])) (Const64 <types.UInt64> [16-log2(c)])))
- (Const64 <types.UInt64> [log2(c)]))
+ (Add16 <t> n (Rsh16Ux64 <t> (Rsh16x64 <t> n (Const64 <typ.UInt64> [15])) (Const64 <typ.UInt64> [16-log2(c)])))
+ (Const64 <typ.UInt64> [log2(c)]))
(Div32 <t> n (Const32 [c])) && isPowerOfTwo(c) ->
(Rsh32x64
- (Add32 <t> n (Rsh32Ux64 <t> (Rsh32x64 <t> n (Const64 <types.UInt64> [31])) (Const64 <types.UInt64> [32-log2(c)])))
- (Const64 <types.UInt64> [log2(c)]))
+ (Add32 <t> n (Rsh32Ux64 <t> (Rsh32x64 <t> n (Const64 <typ.UInt64> [31])) (Const64 <typ.UInt64> [32-log2(c)])))
+ (Const64 <typ.UInt64> [log2(c)]))
(Div64 <t> n (Const64 [c])) && isPowerOfTwo(c) ->
(Rsh64x64
- (Add64 <t> n (Rsh64Ux64 <t> (Rsh64x64 <t> n (Const64 <types.UInt64> [63])) (Const64 <types.UInt64> [64-log2(c)])))
- (Const64 <types.UInt64> [log2(c)]))
+ (Add64 <t> n (Rsh64Ux64 <t> (Rsh64x64 <t> n (Const64 <typ.UInt64> [63])) (Const64 <typ.UInt64> [64-log2(c)])))
+ (Const64 <typ.UInt64> [log2(c)]))
// Signed divide, not a power of 2. Strength reduce to a multiply.
(Div8 <t> x (Const8 [c])) && smagicOK(8,c) ->
(Sub8 <t>
(Rsh32x64 <t>
- (Mul32 <types.UInt32>
- (Const32 <types.UInt32> [int64(smagic(8,c).m)])
+ (Mul32 <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(smagic(8,c).m)])
(SignExt8to32 x))
- (Const64 <types.UInt64> [8+smagic(8,c).s]))
+ (Const64 <typ.UInt64> [8+smagic(8,c).s]))
(Rsh32x64 <t>
(SignExt8to32 x)
- (Const64 <types.UInt64> [31])))
+ (Const64 <typ.UInt64> [31])))
(Div16 <t> x (Const16 [c])) && smagicOK(16,c) ->
(Sub16 <t>
(Rsh32x64 <t>
- (Mul32 <types.UInt32>
- (Const32 <types.UInt32> [int64(smagic(16,c).m)])
+ (Mul32 <typ.UInt32>
+ (Const32 <typ.UInt32> [int64(smagic(16,c).m)])
(SignExt16to32 x))
- (Const64 <types.UInt64> [16+smagic(16,c).s]))
+ (Const64 <typ.UInt64> [16+smagic(16,c).s]))
(Rsh32x64 <t>
(SignExt16to32 x)
- (Const64 <types.UInt64> [31])))
+ (Const64 <typ.UInt64> [31])))
(Div32 <t> x (Const32 [c])) && smagicOK(32,c) && config.RegSize == 8 ->
(Sub32 <t>
(Rsh64x64 <t>
- (Mul64 <types.UInt64>
- (Const64 <types.UInt64> [int64(smagic(32,c).m)])
+ (Mul64 <typ.UInt64>
+ (Const64 <typ.UInt64> [int64(smagic(32,c).m)])
(SignExt32to64 x))
- (Const64 <types.UInt64> [32+smagic(32,c).s]))
+ (Const64 <typ.UInt64> [32+smagic(32,c).s]))
(Rsh64x64 <t>
(SignExt32to64 x)
- (Const64 <types.UInt64> [63])))
+ (Const64 <typ.UInt64> [63])))
(Div32 <t> x (Const32 [c])) && smagicOK(32,c) && config.RegSize == 4 && smagic(32,c).m&1 == 0 ->
(Sub32 <t>
(Rsh32x64 <t>
(Hmul32 <t>
- (Const32 <types.UInt32> [int64(int32(smagic(32,c).m/2))])
+ (Const32 <typ.UInt32> [int64(int32(smagic(32,c).m/2))])
x)
- (Const64 <types.UInt64> [smagic(32,c).s-1]))
+ (Const64 <typ.UInt64> [smagic(32,c).s-1]))
(Rsh32x64 <t>
x
- (Const64 <types.UInt64> [31])))
+ (Const64 <typ.UInt64> [31])))
(Div32 <t> x (Const32 [c])) && smagicOK(32,c) && config.RegSize == 4 && smagic(32,c).m&1 != 0 ->
(Sub32 <t>
(Rsh32x64 <t>
(Add32 <t>
(Hmul32 <t>
- (Const32 <types.UInt32> [int64(int32(smagic(32,c).m))])
+ (Const32 <typ.UInt32> [int64(int32(smagic(32,c).m))])
x)
x)
- (Const64 <types.UInt64> [smagic(32,c).s]))
+ (Const64 <typ.UInt64> [smagic(32,c).s]))
(Rsh32x64 <t>
x
- (Const64 <types.UInt64> [31])))
+ (Const64 <typ.UInt64> [31])))
(Div64 <t> x (Const64 [c])) && smagicOK(64,c) && smagic(64,c).m&1 == 0 ->
(Sub64 <t>
(Rsh64x64 <t>
(Hmul64 <t>
- (Const64 <types.UInt64> [int64(smagic(64,c).m/2)])
+ (Const64 <typ.UInt64> [int64(smagic(64,c).m/2)])
x)
- (Const64 <types.UInt64> [smagic(64,c).s-1]))
+ (Const64 <typ.UInt64> [smagic(64,c).s-1]))
(Rsh64x64 <t>
x
- (Const64 <types.UInt64> [63])))
+ (Const64 <typ.UInt64> [63])))
(Div64 <t> x (Const64 [c])) && smagicOK(64,c) && smagic(64,c).m&1 != 0 ->
(Sub64 <t>
(Rsh64x64 <t>
(Add64 <t>
(Hmul64 <t>
- (Const64 <types.UInt64> [int64(smagic(64,c).m)])
+ (Const64 <typ.UInt64> [int64(smagic(64,c).m)])
x)
x)
- (Const64 <types.UInt64> [smagic(64,c).s]))
+ (Const64 <typ.UInt64> [smagic(64,c).s]))
(Rsh64x64 <t>
x
- (Const64 <types.UInt64> [63])))
+ (Const64 <typ.UInt64> [63])))
// Unsigned mod by power of 2 constant.
(Mod8u <t> n (Const8 [c])) && isPowerOfTwo(c&0xff) -> (And8 n (Const8 <t> [(c&0xff)-1]))
fmt.Fprintln(w, "import \"math\"")
fmt.Fprintln(w, "import \"cmd/internal/obj\"")
fmt.Fprintln(w, "import \"cmd/internal/objabi\"")
+ fmt.Fprintln(w, "import \"cmd/compile/internal/types\"")
fmt.Fprintln(w, "var _ = math.MinInt8 // in case not otherwise used")
fmt.Fprintln(w, "var _ = obj.ANOP // in case not otherwise used")
fmt.Fprintln(w, "var _ = objabi.GOROOT // in case not otherwise used")
+ fmt.Fprintln(w, "var _ = types.TypeMem // in case not otherwise used")
fmt.Fprintln(w)
const chunkSize = 10
hasb := strings.Contains(body, "b.")
hasconfig := strings.Contains(body, "config.") || strings.Contains(body, "config)")
hasfe := strings.Contains(body, "fe.")
- hasts := strings.Contains(body, "types.")
+ hastyps := strings.Contains(body, "typ.")
fmt.Fprintf(w, "func rewriteValue%s_%s_%d(v *Value) bool {\n", arch.name, op, chunk)
- if hasb || hasconfig || hasfe {
+ if hasb || hasconfig || hasfe || hastyps {
fmt.Fprintln(w, "b := v.Block")
fmt.Fprintln(w, "_ = b")
}
fmt.Fprintln(w, "fe := b.Func.fe")
fmt.Fprintln(w, "_ = fe")
}
- if hasts {
- fmt.Fprintln(w, "types := &b.Func.Config.Types")
- fmt.Fprintln(w, "_ = types")
+ if hastyps {
+ fmt.Fprintln(w, "typ := &b.Func.Config.Types")
+ fmt.Fprintln(w, "_ = typ")
}
fmt.Fprint(w, body)
fmt.Fprintf(w, "}\n")
fmt.Fprintln(w, "_ = config")
fmt.Fprintln(w, "fe := b.Func.fe")
fmt.Fprintln(w, "_ = fe")
- fmt.Fprintln(w, "types := &config.Types")
- fmt.Fprintln(w, "_ = types")
+ fmt.Fprintln(w, "typ := &config.Types")
+ fmt.Fprintln(w, "_ = typ")
fmt.Fprintf(w, "switch b.Kind {\n")
ops = nil
for op := range blockrules {
if len(ts) != 2 {
panic("Tuple expects 2 arguments")
}
- return "MakeTuple(" + typeName(ts[0]) + ", " + typeName(ts[1]) + ")"
+ return "types.NewTuple(" + typeName(ts[0]) + ", " + typeName(ts[1]) + ")"
}
switch typ {
case "Flags", "Mem", "Void", "Int128":
- return "Type" + typ
+ return "types.Type" + typ
default:
- return "types." + typ
+ return "typ." + typ
}
}
package ssa
-import "fmt"
+import (
+ "cmd/compile/internal/types"
+ "fmt"
+)
// A place that an ssa variable can reside.
type Location interface {
// A LocalSlot is a location in the stack frame.
// It is (possibly a subpiece of) a PPARAM, PPARAMOUT, or PAUTO ONAME node.
type LocalSlot struct {
- N GCNode // an ONAME *gc.Node representing a variable on the stack
- Type Type // type of slot
- Off int64 // offset of slot in N
+ N GCNode // an ONAME *gc.Node representing a variable on the stack
+ Type *types.Type // type of slot
+ Off int64 // offset of slot in N
}
func (s LocalSlot) Name() string {
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/src"
"testing"
)
c := testConfigS390X(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("SP", OpSP, TypeUInt64, 0, nil),
- Valu("ret", OpAddr, TypeInt64Ptr, 0, nil, "SP"),
- Valu("N", OpArg, TypeInt64, 0, c.Frontend().Auto(src.NoXPos, TypeInt64)),
- Valu("starti", OpConst64, TypeInt64, 0, nil),
- Valu("startsum", OpConst64, TypeInt64, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("SP", OpSP, c.config.Types.UInt64, 0, nil),
+ Valu("ret", OpAddr, c.config.Types.Int64.PtrTo(), 0, nil, "SP"),
+ Valu("N", OpArg, c.config.Types.Int64, 0, c.Frontend().Auto(src.NoXPos, c.config.Types.Int64)),
+ Valu("starti", OpConst64, c.config.Types.Int64, 0, nil),
+ Valu("startsum", OpConst64, c.config.Types.Int64, 0, nil),
Goto("b1")),
Bloc("b1",
- Valu("phii", OpPhi, TypeInt64, 0, nil, "starti", "i"),
- Valu("phisum", OpPhi, TypeInt64, 0, nil, "startsum", "sum"),
- Valu("cmp1", OpLess64, TypeBool, 0, nil, "phii", "N"),
+ Valu("phii", OpPhi, c.config.Types.Int64, 0, nil, "starti", "i"),
+ Valu("phisum", OpPhi, c.config.Types.Int64, 0, nil, "startsum", "sum"),
+ Valu("cmp1", OpLess64, c.config.Types.Bool, 0, nil, "phii", "N"),
If("cmp1", "b2", "b3")),
Bloc("b2",
- Valu("c1", OpConst64, TypeInt64, 1, nil),
- Valu("i", OpAdd64, TypeInt64, 0, nil, "phii", "c1"),
- Valu("c3", OpConst64, TypeInt64, 3, nil),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "phisum", "c3"),
+ Valu("c1", OpConst64, c.config.Types.Int64, 1, nil),
+ Valu("i", OpAdd64, c.config.Types.Int64, 0, nil, "phii", "c1"),
+ Valu("c3", OpConst64, c.config.Types.Int64, 3, nil),
+ Valu("sum", OpAdd64, c.config.Types.Int64, 0, nil, "phisum", "c3"),
Goto("b1")),
Bloc("b3",
- Valu("retdef", OpVarDef, TypeMem, 0, nil, "mem"),
- Valu("store", OpStore, TypeMem, 0, TypeInt64, "ret", "phisum", "retdef"),
+ Valu("retdef", OpVarDef, types.TypeMem, 0, nil, "mem"),
+ Valu("store", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ret", "phisum", "retdef"),
Exit("store")))
CheckFunc(fun.f)
Compile(fun.f)
package ssa
-import "fmt"
+import (
+ "cmd/compile/internal/types"
+ "fmt"
+)
// an edgeMem records a backedge, together with the memory
// phi functions at the target of the backedge that must
// It's possible that there is no memory state (no global/pointer loads/stores or calls)
if lastMems[f.Entry.ID] == nil {
- lastMems[f.Entry.ID] = f.Entry.NewValue0(f.Entry.Pos, OpInitMem, TypeMem)
+ lastMems[f.Entry.ID] = f.Entry.NewValue0(f.Entry.Pos, OpInitMem, types.TypeMem)
}
memDefsAtBlockEnds := make([]*Value, f.NumBlocks()) // For each block, the mem def seen at its bottom. Could be from earlier block.
// if sp < g.limit { goto sched }
// goto header
- types := &f.Config.Types
- pt := types.Uintptr
+ cfgtypes := &f.Config.Types
+ pt := cfgtypes.Uintptr
g := test.NewValue1(bb.Pos, OpGetG, pt, mem0)
sp := test.NewValue0(bb.Pos, OpSP, pt)
cmpOp := OpLess64U
}
limaddr := test.NewValue1I(bb.Pos, OpOffPtr, pt, 2*pt.Size(), g)
lim := test.NewValue2(bb.Pos, OpLoad, pt, limaddr, mem0)
- cmp := test.NewValue2(bb.Pos, cmpOp, types.Bool, sp, lim)
+ cmp := test.NewValue2(bb.Pos, cmpOp, cfgtypes.Bool, sp, lim)
test.SetControl(cmp)
// if true, goto sched
// mem1 := call resched (mem0)
// goto header
resched := f.fe.Syslook("goschedguarded")
- mem1 := sched.NewValue1A(bb.Pos, OpStaticCall, TypeMem, resched, mem0)
+ mem1 := sched.NewValue1A(bb.Pos, OpStaticCall, types.TypeMem, resched, mem0)
sched.AddEdgeTo(h)
headerMemPhi.AddArg(mem1)
package ssa
import (
+ "cmd/compile/internal/types"
"strconv"
"testing"
)
// nil checks, none of which can be eliminated.
// Run with multiple depths to observe big-O behavior.
func benchmarkNilCheckDeep(b *testing.B, depth int) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ c := testConfig(b)
+ ptrType := c.config.Types.BytePtr
var blocs []bloc
blocs = append(blocs,
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto(blockn(0)),
),
)
blocs = append(blocs,
Bloc(blockn(i),
Valu(ptrn(i), OpAddr, ptrType, 0, nil, "sb"),
- Valu(booln(i), OpIsNonNil, TypeBool, 0, nil, ptrn(i)),
+ Valu(booln(i), OpIsNonNil, c.config.Types.Bool, 0, nil, ptrn(i)),
If(booln(i), blockn(i+1), "exit"),
),
)
Bloc("exit", Exit("mem")),
)
- c := testConfig(b)
fun := c.Fun("entry", blocs...)
CheckFunc(fun.f)
// TestNilcheckSimple verifies that a second repeated nilcheck is removed.
func TestNilcheckSimple(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
- Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool1", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool1", "secondCheck", "exit")),
Bloc("secondCheck",
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool2", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckDomOrder ensures that the nil check elimination isn't dependent
// on the order of the dominees.
func TestNilcheckDomOrder(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
- Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool1", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool1", "secondCheck", "exit")),
Bloc("exit",
Exit("mem")),
Bloc("secondCheck",
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool2", "extra", "exit")),
Bloc("extra",
Goto("exit")))
// TestNilcheckAddr verifies that nilchecks of OpAddr constructed values are removed.
func TestNilcheckAddr(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpAddr, ptrType, 0, nil, "sb"),
- Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool1", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool1", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckAddPtr verifies that nilchecks of OpAddPtr constructed values are removed.
func TestNilcheckAddPtr(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
- Valu("off", OpConst64, TypeInt64, 20, nil),
+ Valu("off", OpConst64, c.config.Types.Int64, 20, nil),
Valu("ptr1", OpAddPtr, ptrType, 0, nil, "sb", "off"),
- Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool1", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool1", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckPhi tests that nil checks of phis, for which all values are known to be
// non-nil are removed.
func TestNilcheckPhi(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("sp", OpSP, TypeInvalid, 0, nil),
- Valu("baddr", OpAddr, TypeBool, 0, "b", "sp"),
- Valu("bool1", OpLoad, TypeBool, 0, nil, "baddr", "mem"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("sp", OpSP, types.TypeInvalid, 0, nil),
+ Valu("baddr", OpAddr, c.config.Types.Bool, 0, "b", "sp"),
+ Valu("bool1", OpLoad, c.config.Types.Bool, 0, nil, "baddr", "mem"),
If("bool1", "b1", "b2")),
Bloc("b1",
Valu("ptr1", OpAddr, ptrType, 0, nil, "sb"),
// both ptr1 and ptr2 are guaranteed non-nil here
Bloc("checkPtr",
Valu("phi", OpPhi, ptrType, 0, nil, "ptr1", "ptr2"),
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "phi"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "phi"),
If("bool2", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckKeepRemove verifies that duplicate checks of the same pointer
// are removed, but checks of different pointers are not.
func TestNilcheckKeepRemove(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
- Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool1", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool1", "differentCheck", "exit")),
Bloc("differentCheck",
Valu("ptr2", OpLoad, ptrType, 0, nil, "sb", "mem"),
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr2"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr2"),
If("bool2", "secondCheck", "exit")),
Bloc("secondCheck",
- Valu("bool3", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool3", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool3", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckInFalseBranch tests that nil checks in the false branch of an nilcheck
// block are *not* removed.
func TestNilcheckInFalseBranch(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
- Valu("bool1", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool1", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool1", "extra", "secondCheck")),
Bloc("secondCheck",
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool2", "extra", "thirdCheck")),
Bloc("thirdCheck",
- Valu("bool3", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool3", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool3", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckUser verifies that a user nil check that dominates a generated nil check
// will remove the generated nil check.
func TestNilcheckUser(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
- Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "secondCheck", "exit")),
Bloc("secondCheck",
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool2", "extra", "exit")),
Bloc("extra",
Goto("exit")),
// TestNilcheckBug reproduces a bug in nilcheckelim found by compiling math/big
func TestNilcheckBug(t *testing.T) {
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
c := testConfig(t)
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto("checkPtr")),
Bloc("checkPtr",
Valu("ptr1", OpLoad, ptrType, 0, nil, "sb", "mem"),
Valu("nilptr", OpConstNil, ptrType, 0, nil),
- Valu("bool1", OpNeqPtr, TypeBool, 0, nil, "ptr1", "nilptr"),
+ Valu("bool1", OpNeqPtr, c.config.Types.Bool, 0, nil, "ptr1", "nilptr"),
If("bool1", "secondCheck", "couldBeNil")),
Bloc("couldBeNil",
Goto("secondCheck")),
Bloc("secondCheck",
- Valu("bool2", OpIsNonNil, TypeBool, 0, nil, "ptr1"),
+ Valu("bool2", OpIsNonNil, c.config.Types.Bool, 0, nil, "ptr1"),
If("bool2", "extra", "exit")),
Bloc("extra",
// prevent fuse from eliminating this block
- Valu("store", OpStore, TypeMem, 0, ptrType, "ptr1", "nilptr", "mem"),
+ Valu("store", OpStore, types.TypeMem, 0, ptrType, "ptr1", "nilptr", "mem"),
Goto("exit")),
Bloc("exit",
- Valu("phi", OpPhi, TypeMem, 0, nil, "mem", "store"),
+ Valu("phi", OpPhi, types.TypeMem, 0, nil, "mem", "store"),
Exit("phi")))
CheckFunc(fun.f)
package ssa
import (
+ "cmd/compile/internal/types"
"fmt"
"testing"
)
func genFunction(size int) []bloc {
var blocs []bloc
- elemType := &TypeImpl{Size_: 8, Name: "testtype"}
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr", Elem_: elemType} // dummy for testing
+ elemType := types.Types[types.TINT64]
+ ptrType := elemType.PtrTo()
valn := func(s string, m, n int) string { return fmt.Sprintf("%s%d-%d", s, m, n) }
blocs = append(blocs,
Bloc("entry",
- Valu(valn("store", 0, 4), OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
+ Valu(valn("store", 0, 4), OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
Goto(blockn(1)),
),
)
for i := 1; i < size+1; i++ {
blocs = append(blocs, Bloc(blockn(i),
- Valu(valn("v", i, 0), OpConstBool, TypeBool, 1, nil),
+ Valu(valn("v", i, 0), OpConstBool, types.Types[types.TBOOL], 1, nil),
Valu(valn("addr", i, 1), OpAddr, ptrType, 0, nil, "sb"),
Valu(valn("addr", i, 2), OpAddr, ptrType, 0, nil, "sb"),
Valu(valn("addr", i, 3), OpAddr, ptrType, 0, nil, "sb"),
- Valu(valn("zero", i, 1), OpZero, TypeMem, 8, elemType, valn("addr", i, 3),
+ Valu(valn("zero", i, 1), OpZero, types.TypeMem, 8, elemType, valn("addr", i, 3),
valn("store", i-1, 4)),
- Valu(valn("store", i, 1), OpStore, TypeMem, 0, elemType, valn("addr", i, 1),
+ Valu(valn("store", i, 1), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 1),
valn("v", i, 0), valn("zero", i, 1)),
- Valu(valn("store", i, 2), OpStore, TypeMem, 0, elemType, valn("addr", i, 2),
+ Valu(valn("store", i, 2), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 2),
valn("v", i, 0), valn("store", i, 1)),
- Valu(valn("store", i, 3), OpStore, TypeMem, 0, elemType, valn("addr", i, 1),
+ Valu(valn("store", i, 3), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 1),
valn("v", i, 0), valn("store", i, 2)),
- Valu(valn("store", i, 4), OpStore, TypeMem, 0, elemType, valn("addr", i, 3),
+ Valu(valn("store", i, 4), OpStore, types.TypeMem, 0, elemType, valn("addr", i, 3),
valn("v", i, 0), valn("store", i, 3)),
Goto(blockn(i+1))))
}
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/objabi"
"cmd/internal/src"
"fmt"
}
// compatRegs returns the set of registers which can store a type t.
-func (s *regAllocState) compatRegs(t Type) regMask {
+func (s *regAllocState) compatRegs(t *types.Type) regMask {
var m regMask
if t.IsTuple() || t.IsFlags() {
return 0
}
- if t.IsFloat() || t == TypeInt128 {
+ if t.IsFloat() || t == types.TypeInt128 {
m = s.f.Config.fpRegMask
} else {
m = s.f.Config.gpRegMask
}
// findRegFor finds a register we can use to make a temp copy of type typ.
-func (e *edgeState) findRegFor(typ Type) Location {
+func (e *edgeState) findRegFor(typ *types.Type) Location {
// Which registers are possibilities.
var m regMask
types := &e.s.f.Config.Types
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/src"
"testing"
)
c := testConfig(t)
f := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("x", OpAMD64MOVLconst, TypeInt8, 1, nil),
- Valu("y", OpAMD64MOVLconst, TypeInt8, 2, nil),
- Valu("a", OpAMD64TESTB, TypeFlags, 0, nil, "x", "y"),
- Valu("b", OpAMD64TESTB, TypeFlags, 0, nil, "y", "x"),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("x", OpAMD64MOVLconst, c.config.Types.Int8, 1, nil),
+ Valu("y", OpAMD64MOVLconst, c.config.Types.Int8, 2, nil),
+ Valu("a", OpAMD64TESTB, types.TypeFlags, 0, nil, "x", "y"),
+ Valu("b", OpAMD64TESTB, types.TypeFlags, 0, nil, "y", "x"),
Eq("a", "if", "exit"),
),
Bloc("if",
c := testConfig(t)
f := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("ptr", OpArg, TypeInt64Ptr, 0, c.Frontend().Auto(src.NoXPos, TypeInt64)),
- Valu("cond", OpArg, TypeBool, 0, c.Frontend().Auto(src.NoXPos, TypeBool)),
- Valu("ld", OpAMD64MOVQload, TypeInt64, 0, nil, "ptr", "mem"), // this value needs a spill
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("ptr", OpArg, c.config.Types.Int64.PtrTo(), 0, c.Frontend().Auto(src.NoXPos, c.config.Types.Int64)),
+ Valu("cond", OpArg, c.config.Types.Bool, 0, c.Frontend().Auto(src.NoXPos, c.config.Types.Bool)),
+ Valu("ld", OpAMD64MOVQload, c.config.Types.Int64, 0, nil, "ptr", "mem"), // this value needs a spill
Goto("loop"),
),
Bloc("loop",
- Valu("memphi", OpPhi, TypeMem, 0, nil, "mem", "call"),
- Valu("call", OpAMD64CALLstatic, TypeMem, 0, nil, "memphi"),
- Valu("test", OpAMD64CMPBconst, TypeFlags, 0, nil, "cond"),
+ Valu("memphi", OpPhi, types.TypeMem, 0, nil, "mem", "call"),
+ Valu("call", OpAMD64CALLstatic, types.TypeMem, 0, nil, "memphi"),
+ Valu("test", OpAMD64CMPBconst, types.TypeFlags, 0, nil, "cond"),
Eq("test", "next", "exit"),
),
Bloc("next",
Goto("loop"),
),
Bloc("exit",
- Valu("store", OpAMD64MOVQstore, TypeMem, 0, nil, "ptr", "ld", "call"),
+ Valu("store", OpAMD64MOVQstore, types.TypeMem, 0, nil, "ptr", "ld", "call"),
Exit("store"),
),
)
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"fmt"
"io"
// Common functions called from rewriting rules
-func is64BitFloat(t Type) bool {
+func is64BitFloat(t *types.Type) bool {
return t.Size() == 8 && t.IsFloat()
}
-func is32BitFloat(t Type) bool {
+func is32BitFloat(t *types.Type) bool {
return t.Size() == 4 && t.IsFloat()
}
-func is64BitInt(t Type) bool {
+func is64BitInt(t *types.Type) bool {
return t.Size() == 8 && t.IsInteger()
}
-func is32BitInt(t Type) bool {
+func is32BitInt(t *types.Type) bool {
return t.Size() == 4 && t.IsInteger()
}
-func is16BitInt(t Type) bool {
+func is16BitInt(t *types.Type) bool {
return t.Size() == 2 && t.IsInteger()
}
-func is8BitInt(t Type) bool {
+func is8BitInt(t *types.Type) bool {
return t.Size() == 1 && t.IsInteger()
}
-func isPtr(t Type) bool {
+func isPtr(t *types.Type) bool {
return t.IsPtrShaped()
}
-func isSigned(t Type) bool {
+func isSigned(t *types.Type) bool {
return t.IsSigned()
}
-func typeSize(t Type) int64 {
+func typeSize(t *types.Type) int64 {
return t.Size()
}
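The predicate helpers above change only their parameter type, from the old `ssa.Type` interface to the concrete `*types.Type`; the bodies are untouched because they only call `Size` and kind methods that exist on both. A minimal, self-contained sketch of that shape (using a toy stand-in `Type` struct, since `cmd/compile/internal/types` is an internal package and cannot be imported here):

```go
package main

import "fmt"

// Toy stand-in for *types.Type: only the two methods the
// predicates below need. The real compiler passes the concrete
// *cmd/compile/internal/types.Type instead of an interface.
type Type struct {
	size  int64
	float bool
}

func (t *Type) Size() int64   { return t.size }
func (t *Type) IsFloat() bool { return t.float }

// Mirrors the rewritten helpers: each predicate combines a size
// check with a kind check on the concrete type.
func is64BitFloat(t *Type) bool { return t.Size() == 8 && t.IsFloat() }
func is32BitFloat(t *Type) bool { return t.Size() == 4 && t.IsFloat() }

func main() {
	f64 := &Type{size: 8, float: true}
	i64 := &Type{size: 8, float: false}
	fmt.Println(is64BitFloat(f64), is64BitFloat(i64), is32BitFloat(f64))
}
```

Because the methods are now called on a concrete pointer rather than through an interface, the calls can be devirtualized and inlined, which is part of the compiler-performance gain the CL description mentions.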
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValue386(v *Value) bool {
switch v.Op {
c := v_0.AuxInt
x := v.Args[1]
v.reset(Op386InvertFlags)
- v0 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v0.AuxInt = int64(int8(c))
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(Op386InvertFlags)
- v0 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(Op386InvertFlags)
- v0 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v0.AuxInt = int64(int16(c))
v0.AddArg(x)
v.AddArg(v0)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVSDconst [c])
// cond: config.ctxt.Flag_shared
// result: (MOVSDconst2 (MOVSDconst1 [c]))
break
}
v.reset(Op386MOVSDconst2)
- v0 := b.NewValue0(v.Pos, Op386MOVSDconst1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVSDconst1, typ.UInt32)
v0.AuxInt = c
v.AddArg(v0)
return true
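The `types :=` to `typ :=` rename in the hunk above is forced by Go's scoping rules: once the generated function declares a local named `types`, the imported `types` package is shadowed for the rest of the function body, so references like `types.TypeFlags` would stop resolving. A small sketch of the same shadowing effect, using `fmt` in place of the `types` package:

```go
package main

import "fmt"

// A local variable that reuses an imported package's name shadows
// the import from its declaration to the end of the function, so
// package-qualified references (like types.TypeFlags in the
// generated rules) cannot appear after it. Renaming the local to
// "typ", as the CL does, sidesteps the clash.
func shadowed() string {
	fmt := "shadowed" // from here on, the fmt *package* is unreachable by name
	return fmt
}

func main() {
	fmt.Println(shadowed())
}
```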
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVSSconst [c])
// cond: config.ctxt.Flag_shared
// result: (MOVSSconst2 (MOVSSconst1 [c]))
break
}
v.reset(Op386MOVSSconst2)
- v0 := b.NewValue0(v.Pos, Op386MOVSSconst1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVSSconst1, typ.UInt32)
v0.AuxInt = c
v.AddArg(v0)
return true
func rewriteValue386_Op386ORL_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL x (MOVLconst [c]))
// cond:
// result: (ORLconst [c] x)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, Op386MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, Op386MOVWload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValue386_Op386ORL_10(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL s0:(SHLLconst [8] x1:(MOVBload [i1] {s} p mem)) x0:(MOVBload [i0] {s} p mem))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0)
// result: @mergePoint(b,x0,x1) (MOVWload [i0] {s} p mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, Op386MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, Op386MOVWload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1, x2)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1, x2)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1, x2)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1, x2)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValue386_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (DIVW (SignExt8to16 x) (SignExt8to16 y))
x := v.Args[0]
y := v.Args[1]
v.reset(Op386DIVW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValue386_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y))
x := v.Args[0]
y := v.Args[1]
v.reset(Op386DIVWU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v1.AddArg(y)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQ)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQ)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQ)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQ)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETEQ)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGE)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETAE)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGEF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETAE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGEF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGE)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETAE)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETG)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETA)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETG)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETA)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETG)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETA)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(Op386SETB)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
for {
p := v.Args[0]
v.reset(Op386SETNE)
- v0 := b.NewValue0(v.Pos, Op386TESTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386TESTL, types.TypeFlags)
v0.AddArg(p)
v0.AddArg(p)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(Op386SETBE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETLE)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETBE)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETLE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGEF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISS, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETBE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGEF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETLE)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETBE)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETL)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETB)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETL)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISS, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETB)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETGF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETL)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETB)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValue386_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (MODW (SignExt8to16 x) (SignExt8to16 y))
x := v.Args[0]
y := v.Args[1]
v.reset(Op386MODW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValue386_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (MODWU (ZeroExt8to16 x) (ZeroExt8to16 y))
x := v.Args[0]
y := v.Args[1]
v.reset(Op386MODWU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValue386_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(Op386MOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVBload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, Op386MOVBload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(Op386MOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, Op386MOVWload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(Op386MOVLstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(Op386MOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVBload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, Op386MOVBload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, Op386MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, Op386MOVWload, typ.UInt16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(Op386MOVBstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVBload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, Op386MOVBload, typ.UInt8)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(Op386MOVWstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, Op386MOVWload, typ.UInt16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(Op386MOVLstore)
v.AuxInt = 3
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v0.AuxInt = 3
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(Op386MOVLstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v1.AuxInt = s % 4
v1.AddArg(src)
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, Op386MOVLstore, TypeMem)
+ v2 := b.NewValue0(v.Pos, Op386MOVLstore, types.TypeMem)
v2.AddArg(dst)
- v3 := b.NewValue0(v.Pos, Op386MOVLload, types.UInt32)
+ v3 := b.NewValue0(v.Pos, Op386MOVLload, typ.UInt32)
v3.AddArg(src)
v3.AddArg(mem)
v2.AddArg(v3)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [s] dst src mem)
// cond: s > 8 && s <= 4*128 && s%4 == 0 && !config.noDuffDevice
// result: (DUFFCOPY [10*(128-s/4)] dst src mem)
v.reset(Op386REPMOVSL)
v.AddArg(dst)
v.AddArg(src)
- v0 := b.NewValue0(v.Pos, Op386MOVLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLconst, typ.UInt32)
v0.AuxInt = s / 4
v.AddArg(v0)
v.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neg32F x)
// cond: !config.use387
- // result: (PXOR x (MOVSSconst <types.Float32> [f2i(math.Copysign(0, -1))]))
+ // result: (PXOR x (MOVSSconst <typ.Float32> [f2i(math.Copysign(0, -1))]))
for {
x := v.Args[0]
if !(!config.use387) {
}
v.reset(Op386PXOR)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, Op386MOVSSconst, types.Float32)
+ v0 := b.NewValue0(v.Pos, Op386MOVSSconst, typ.Float32)
v0.AuxInt = f2i(math.Copysign(0, -1))
v.AddArg(v0)
return true
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neg64F x)
// cond: !config.use387
- // result: (PXOR x (MOVSDconst <types.Float64> [f2i(math.Copysign(0, -1))]))
+ // result: (PXOR x (MOVSDconst <typ.Float64> [f2i(math.Copysign(0, -1))]))
for {
x := v.Args[0]
if !(!config.use387) {
}
v.reset(Op386PXOR)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, Op386MOVSDconst, types.Float64)
+ v0 := b.NewValue0(v.Pos, Op386MOVSDconst, typ.Float64)
v0.AuxInt = f2i(math.Copysign(0, -1))
v.AddArg(v0)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNE)
- v0 := b.NewValue0(v.Pos, Op386CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNEF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNEF)
- v0 := b.NewValue0(v.Pos, Op386UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNE)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNE)
- v0 := b.NewValue0(v.Pos, Op386CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(Op386SETNE)
- v0 := b.NewValue0(v.Pos, Op386CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPWconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, Op386NOTL, y.Type)
v2 := b.NewValue0(v.Pos, Op386SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, Op386CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, Op386CMPBconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
}
func rewriteValue386_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (MOVSDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(Op386MOVSDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (MOVSSstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(Op386MOVSSstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4
+ // cond: t.(*types.Type).Size() == 4
// result: (MOVLstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4) {
+ if !(t.(*types.Type).Size() == 4) {
break
}
v.reset(Op386MOVLstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(Op386MOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(Op386MOVBstore)
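In the Store rules above, the value's `Aux` field is an `interface{}`, so rule conditions must type-assert it before calling `Size`; the CL changes the assertion target from the old `ssa.Type` interface to the concrete `*types.Type`. A hedged, self-contained sketch of that pattern (with toy `Type` and `Value` stand-ins, since the real types live in internal packages):

```go
package main

import "fmt"

// Toy stand-in for *types.Type.
type Type struct{ size int64 }

func (t *Type) Size() int64 { return t.size }

// Toy stand-in for ssa.Value: Aux is untyped, as in the compiler,
// so each use must assert the stored dynamic type.
type Value struct{ Aux interface{} }

// Mirrors the rewritten condition: assert the concrete pointer
// type, then query its size. Before the CL this read
// v.Aux.(Type).Size() against the ssa.Type interface.
func storeSize(v *Value) int64 {
	return v.Aux.(*Type).Size()
}

func main() {
	v := &Value{Aux: &Type{size: 4}}
	fmt.Println(storeSize(v)) // prints 4, selecting the MOVLstore rule
}
```

A failed assertion here panics, which is acceptable in the rewrite rules because `Aux` on a Store is always populated with a type by the SSA builder.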
func rewriteValue386_OpZero_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [0] _ mem)
// cond:
// result: mem
v.reset(Op386MOVBstoreconst)
v.AuxInt = makeValAndOff(0, 2)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVWstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVWstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(Op386MOVBstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(Op386MOVWstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(Op386MOVLstoreconst)
v.AuxInt = makeValAndOff(0, 3)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
}
v.reset(OpZero)
v.AuxInt = s - s%4
- v0 := b.NewValue0(v.Pos, Op386ADDLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386ADDLconst, typ.UInt32)
v0.AuxInt = s % 4
v0.AddArg(destptr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(destptr)
v1.AddArg(mem)
v.reset(Op386MOVLstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [12] destptr mem)
// cond:
// result: (MOVLstoreconst [makeValAndOff(0,8)] destptr (MOVLstoreconst [makeValAndOff(0,4)] destptr (MOVLstoreconst [0] destptr mem)))
v.reset(Op386MOVLstoreconst)
v.AuxInt = makeValAndOff(0, 8)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v0.AuxInt = makeValAndOff(0, 4)
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(destptr)
v1.AddArg(mem)
v.reset(Op386MOVLstoreconst)
v.AuxInt = makeValAndOff(0, 12)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v0.AuxInt = makeValAndOff(0, 8)
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v1 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v1.AuxInt = makeValAndOff(0, 4)
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, Op386MOVLstoreconst, TypeMem)
+ v2 := b.NewValue0(v.Pos, Op386MOVLstoreconst, types.TypeMem)
v2.AuxInt = 0
v2.AddArg(destptr)
v2.AddArg(mem)
v.reset(Op386DUFFZERO)
v.AuxInt = 1 * (128 - s/4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
}
v.reset(Op386REPSTOSL)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, Op386MOVLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, Op386MOVLconst, typ.UInt32)
v0.AuxInt = s / 4
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, Op386MOVLconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, Op386MOVLconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
v.AddArg(mem)
v.reset(Op386XORLconst)
v.AuxInt = -1
v0 := b.NewValue0(v.Pos, Op386SBBLcarrymask, t)
- v1 := b.NewValue0(v.Pos, Op386CMPLconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, Op386CMPLconst, types.TypeFlags)
v1.AuxInt = 1
v1.AddArg(x)
v0.AddArg(v1)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case Block386EQ:
// match: (EQ (InvertFlags cmp) yes no)
_ = v
cond := b.Control
b.Kind = Block386NE
- v0 := b.NewValue0(v.Pos, Op386TESTB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, Op386TESTB, types.TypeFlags)
v0.AddArg(cond)
v0.AddArg(cond)
b.SetControl(v0)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValueAMD64(v *Value) bool {
switch v.Op {
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpAMD64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v0.AuxInt = int64(int8(c))
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpAMD64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpAMD64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v0.AuxInt = int64(int16(c))
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueAMD64_OpAMD64MOVLstoreconst_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVLstoreconst [sc] {s} (ADDQconst [off] ptr) mem)
// cond: ValAndOff(sc).canAdd(off)
// result: (MOVLstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
v.AuxInt = ValAndOff(a).Off()
v.Aux = s
v.AddArg(p)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v0.AuxInt = ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueAMD64_OpAMD64MOVLstoreconstidx1_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVLstoreconstidx1 [c] {sym} ptr (SHLQconst [2] idx) mem)
// cond:
// result: (MOVLstoreconstidx4 [c] {sym} ptr idx mem)
v.Aux = s
v.AddArg(p)
v.AddArg(i)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v0.AuxInt = ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueAMD64_OpAMD64MOVLstoreconstidx4_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVLstoreconstidx4 [x] {sym} (ADDQconst [c] ptr) idx mem)
// cond:
// result: (MOVLstoreconstidx4 [ValAndOff(x).add(c)] {sym} ptr idx mem)
v0.AuxInt = 2
v0.AddArg(i)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v1.AuxInt = ValAndOff(a).Val()&0xffffffff | ValAndOff(c).Val()<<32
v.AddArg(v1)
v.AddArg(mem)
func rewriteValueAMD64_OpAMD64ORL_40(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL (SHLL x (ANDLconst y [ 7])) (ANDL (SHRB x (NEGL (ADDLconst (ANDLconst y [ 7]) [ -8]))) (SBBLcarrymask (CMPLconst (NEGL (ADDLconst (ANDLconst y [ 7]) [ -8])) [ 8]))))
// cond: v.Type.Size() == 1
// result: (ROLB x y)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueAMD64_OpAMD64ORL_50(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL sh:(SHLLconst [8] x1:(MOVBload [i1] {s} p mem)) x0:(MOVBload [i0] {s} p mem))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWload [i0] {s} p mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORL_60(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL x0:(MOVBloadidx1 [i0] {s} idx p mem) sh:(SHLLconst [8] x1:(MOVBloadidx1 [i1] {s} idx p mem)))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWloadidx1 <v.Type> [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueAMD64_OpAMD64ORL_70(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL sh:(SHLLconst [16] x1:(MOVWloadidx1 [i1] {s} idx p mem)) x0:(MOVWloadidx1 [i0] {s} p idx mem))
// cond: i1 == i0+2 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVLloadidx1 [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORL_80(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) or:(ORL y s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem))))
// cond: i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j0] (MOVWloadidx1 [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueAMD64_OpAMD64ORL_90(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL sh:(SHLLconst [8] x0:(MOVBload [i0] {s} p mem)) x1:(MOVBload [i1] {s} p mem))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWload [i0] {s} p mem))
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBload [i0] {s} p mem)) or:(ORL s1:(SHLLconst [j1] x1:(MOVBload [i1] {s} p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBload [i0] {s} p mem)) or:(ORL y s1:(SHLLconst [j1] x1:(MOVBload [i1] {s} p mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL s1:(SHLLconst [j1] x1:(MOVBload [i1] {s} p mem)) y) s0:(SHLLconst [j0] x0:(MOVBload [i0] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL y s1:(SHLLconst [j1] x1:(MOVBload [i1] {s} p mem))) s0:(SHLLconst [j0] x0:(MOVBload [i0] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueAMD64_OpAMD64ORL_100(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL x1:(MOVBloadidx1 [i1] {s} idx p mem) sh:(SHLLconst [8] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWloadidx1 [i0] {s} p idx mem))
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueAMD64_OpAMD64ORL_110(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL sh:(SHLLconst [16] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))) r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem)))
// cond: i1 == i0+2 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (BSWAPL <v.Type> (MOVLloadidx1 [i0] {s} p idx mem))
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORL_120(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORL s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLLconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORL or:(ORL y s1:(SHLLconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))) s0:(SHLLconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORL <v.Type> (SHLLconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORL {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLLconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_20(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ x x)
// cond:
// result: x
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_30(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ or:(ORQ y s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem))) s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVWload [i0] {s} p mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_40(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ sh:(SHLQconst [8] x1:(MOVBloadidx1 [i1] {s} idx p mem)) x0:(MOVBloadidx1 [i0] {s} p idx mem))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWloadidx1 <v.Type> [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueAMD64_OpAMD64ORQ_50(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ sh:(SHLQconst [16] x1:(MOVWloadidx1 [i1] {s} idx p mem)) x0:(MOVWloadidx1 [i0] {s} idx p mem))
// cond: i1 == i0+2 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVLloadidx1 [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_60(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) or:(ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) y))
// cond: i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVWloadidx1 [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_70(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ or:(ORQ y s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem))) s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVWloadidx1 [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_80(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ s1:(SHLQconst [j1] x1:(MOVWloadidx1 [i1] {s} idx p mem)) or:(ORQ y s0:(SHLQconst [j0] x0:(MOVWloadidx1 [i0] {s} p idx mem))))
// cond: i1 == i0+2 && j1 == j0+16 && j0 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVLloadidx1 [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_90(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ or:(ORQ y s0:(SHLQconst [j0] x0:(MOVWloadidx1 [i0] {s} idx p mem))) s1:(SHLQconst [j1] x1:(MOVWloadidx1 [i1] {s} idx p mem)))
// cond: i1 == i0+2 && j1 == j0+16 && j0 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j0] (MOVLloadidx1 [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem)) or:(ORQ s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem)) or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p mem)) y) s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_100(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBload [i1] {s} p mem))) s0:(SHLQconst [j0] x0:(MOVBload [i0] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWload [i0] {s} p mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem))) or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p mem))) y))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem))) or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p mem)))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p mem))) y) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWload [i1] {s} p mem)))) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWload [i0] {s} p mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLload [i0] {s} p mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLload [i0] {s} p mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_110(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ sh:(SHLQconst [8] x0:(MOVBloadidx1 [i0] {s} idx p mem)) x1:(MOVBloadidx1 [i1] {s} p idx mem))
// cond: i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (ROLWconst <v.Type> [8] (MOVWloadidx1 [i0] {s} p idx mem))
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = 8
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_120(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ sh:(SHLQconst [16] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))) r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem)))
// cond: i1 == i0+2 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (BSWAPL <v.Type> (MOVLloadidx1 [i0] {s} p idx mem))
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPL, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
v0 := b.NewValue0(v.Pos, OpAMD64BSWAPQ, v.Type)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQloadidx1, typ.UInt64)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_130(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)) or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)) or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_140(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem)) y) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem)) y) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} p idx mem))) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] x1:(MOVBloadidx1 [i1] {s} idx p mem))) s0:(SHLQconst [j0] x0:(MOVBloadidx1 [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <types.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (ROLWconst <typ.UInt16> [8] (MOVWloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64ROLWconst, typ.UInt16)
v2.AuxInt = 8
- v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, types.UInt16)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVWloadidx1, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))) or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem))) y))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
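The load-merging rules above rest on a byte-order identity: two byte-swapped (`ROLWconst [8]`) 16-bit loads of adjacent halfwords, shifted into place with `j0 == j1+16`, equal a single 32-bit load followed by `BSWAPL`, shifted by `j1`. A software sketch of that identity (the `fused` helper is illustrative, not compiler code):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/bits"
)

// fused reports whether the two-halfword form and the fused
// BSWAPL-of-MOVL form agree for the given 4 memory bytes and shift j1
// (the rule's condition requires j1 % 32 == 0, with j0 == j1+16).
func fused(mem []byte, j1 uint) bool {
	j0 := j1 + 16
	r0 := bits.ReverseBytes16(binary.LittleEndian.Uint16(mem[0:2])) // ROLWconst [8] of the load at i0
	r1 := bits.ReverseBytes16(binary.LittleEndian.Uint16(mem[2:4])) // ROLWconst [8] of the load at i0+2
	lhs := uint64(r0)<<j0 | uint64(r1)<<j1
	rhs := uint64(bits.ReverseBytes32(binary.LittleEndian.Uint32(mem))) << j1 // BSWAPL (MOVLloadidx1 [i0])
	return lhs == rhs
}

func main() {
	fmt.Println(fused([]byte{0x11, 0x22, 0x33, 0x44}, 0))
}
```

The `Uses == 1` and `clobber` conditions then let the rewriter delete the partial loads once the fused form replaces them.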
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))) or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem))) y))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))) or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem))) y))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))) or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem))) y))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))) or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem)))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_150(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
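The renamed prologue binds `typ` to the cached type list on the Config, then blank-assigns it so generated functions that happen not to consult it still compile. A minimal sketch of that idiom (the `Types` struct and `prologue` function here are illustrative stand-ins, not compiler code):

```go
package main

import "fmt"

// Types mimics ssa.Config.Types, the handy list of cached types that
// the rules now spell "typ" to avoid clashing with the types package.
type Types struct {
	UInt16, UInt32, UInt64 string
}

// prologue mirrors the generated preamble: bind typ, then blank-assign
// it, since Go rejects declared-but-unused variables and not every
// generated rewrite function ends up referencing typ.
func prologue(t *Types) string {
	typ := t
	_ = typ
	return typ.UInt64
}

func main() {
	fmt.Println(prologue(&Types{UInt64: "uint64"}))
}
```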
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))) or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem)))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))) or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem)))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))) or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem)))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
s0 := v.Args[0]
if s0.Op != OpAMD64SHLQconst {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem))) y) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem))) y) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem)))) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem)))) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} p idx mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem))) y) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem))) y) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} p idx mem)))) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueAMD64_OpAMD64ORQ_160(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORQ or:(ORQ y s1:(SHLQconst [j1] r1:(ROLWconst [8] x1:(MOVWloadidx1 [i1] {s} idx p mem)))) s0:(SHLQconst [j0] r0:(ROLWconst [8] x0:(MOVWloadidx1 [i0] {s} idx p mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
- // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <types.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
+ // result: @mergePoint(b,x0,x1) (ORQ <v.Type> (SHLQconst <v.Type> [j1] (BSWAPL <typ.UInt32> (MOVLloadidx1 [i0] {s} p idx mem))) y)
for {
or := v.Args[0]
if or.Op != OpAMD64ORQ {
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SHLQconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64BSWAPL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVLloadidx1, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueAMD64_OpAtomicAdd32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicAdd32 ptr val mem)
// cond:
// result: (AddTupleFirst32 (XADDLlock val ptr mem) val)
val := v.Args[1]
mem := v.Args[2]
v.reset(OpAMD64AddTupleFirst32)
- v0 := b.NewValue0(v.Pos, OpAMD64XADDLlock, MakeTuple(types.UInt32, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpAMD64XADDLlock, types.NewTuple(typ.UInt32, types.TypeMem))
v0.AddArg(val)
v0.AddArg(ptr)
v0.AddArg(mem)
func rewriteValueAMD64_OpAtomicAdd64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicAdd64 ptr val mem)
// cond:
// result: (AddTupleFirst64 (XADDQlock val ptr mem) val)
val := v.Args[1]
mem := v.Args[2]
v.reset(OpAMD64AddTupleFirst64)
- v0 := b.NewValue0(v.Pos, OpAMD64XADDQlock, MakeTuple(types.UInt64, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpAMD64XADDQlock, types.NewTuple(typ.UInt64, types.TypeMem))
v0.AddArg(val)
v0.AddArg(ptr)
v0.AddArg(mem)
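The atomic rules build two-result values: `XADDQlock` produces a `(UInt64, Mem)` pair, now constructed with `types.NewTuple` where the old interface world used `ssa.MakeTuple`, and `Select0`/`Select1` project out the components. A toy model of that tuple machinery (all names here are illustrative stand-ins, not the real `*types.Type` API):

```go
package main

import "fmt"

// Tuple is a toy stand-in for the new TTUPLE kind that types.NewTuple
// (formerly ssa.MakeTuple) constructs for two-result SSA ops.
type Tuple struct {
	first, second string
}

func NewTuple(a, b string) Tuple { return Tuple{a, b} }

// Select0 and Select1 project the components, like the SSA ops of the
// same names used by the atomic and divide lowerings.
func Select0(t Tuple) string { return t.first }
func Select1(t Tuple) string { return t.second }

func main() {
	t := NewTuple("uint64", "mem") // the XADDQlock result shape
	fmt.Println(Select0(t), Select1(t))
}
```

`AddTupleFirst64` then consumes the whole tuple, adding `val` to its first component while threading the memory component through.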
func rewriteValueAMD64_OpAtomicStore32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicStore32 ptr val mem)
// cond:
- // result: (Select1 (XCHGL <MakeTuple(types.UInt32,TypeMem)> val ptr mem))
+ // result: (Select1 (XCHGL <types.NewTuple(typ.UInt32,types.TypeMem)> val ptr mem))
for {
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64XCHGL, MakeTuple(types.UInt32, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpAMD64XCHGL, types.NewTuple(typ.UInt32, types.TypeMem))
v0.AddArg(val)
v0.AddArg(ptr)
v0.AddArg(mem)
func rewriteValueAMD64_OpAtomicStore64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicStore64 ptr val mem)
// cond:
- // result: (Select1 (XCHGQ <MakeTuple(types.UInt64,TypeMem)> val ptr mem))
+ // result: (Select1 (XCHGQ <types.NewTuple(typ.UInt64,types.TypeMem)> val ptr mem))
for {
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64XCHGQ, MakeTuple(types.UInt64, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpAMD64XCHGQ, types.NewTuple(typ.UInt64, types.TypeMem))
v0.AddArg(val)
v0.AddArg(ptr)
v0.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicStorePtrNoWB ptr val mem)
// cond: config.PtrSize == 8
- // result: (Select1 (XCHGQ <MakeTuple(types.BytePtr,TypeMem)> val ptr mem))
+ // result: (Select1 (XCHGQ <types.NewTuple(typ.BytePtr,types.TypeMem)> val ptr mem))
for {
ptr := v.Args[0]
val := v.Args[1]
break
}
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64XCHGQ, MakeTuple(types.BytePtr, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpAMD64XCHGQ, types.NewTuple(typ.BytePtr, types.TypeMem))
v0.AddArg(val)
v0.AddArg(ptr)
v0.AddArg(mem)
}
// match: (AtomicStorePtrNoWB ptr val mem)
// cond: config.PtrSize == 4
- // result: (Select1 (XCHGL <MakeTuple(types.BytePtr,TypeMem)> val ptr mem))
+ // result: (Select1 (XCHGL <types.NewTuple(typ.BytePtr,types.TypeMem)> val ptr mem))
for {
ptr := v.Args[0]
val := v.Args[1]
break
}
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64XCHGL, MakeTuple(types.BytePtr, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpAMD64XCHGL, types.NewTuple(typ.BytePtr, types.TypeMem))
v0.AddArg(val)
v0.AddArg(ptr)
v0.AddArg(mem)
func rewriteValueAMD64_OpBitLen32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitLen32 x)
// cond:
- // result: (BitLen64 (MOVLQZX <types.UInt64> x))
+ // result: (BitLen64 (MOVLQZX <typ.UInt64> x))
for {
x := v.Args[0]
v.reset(OpBitLen64)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLQZX, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLQZX, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValueAMD64_OpBitLen64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitLen64 <t> x)
// cond:
- // result: (ADDQconst [1] (CMOVQEQ <t> (Select0 <t> (BSRQ x)) (MOVQconst <t> [-1]) (Select1 <TypeFlags> (BSRQ x))))
+ // result: (ADDQconst [1] (CMOVQEQ <t> (Select0 <t> (BSRQ x)) (MOVQconst <t> [-1]) (Select1 <types.TypeFlags> (BSRQ x))))
for {
t := v.Type
x := v.Args[0]
v.AuxInt = 1
v0 := b.NewValue0(v.Pos, OpAMD64CMOVQEQ, t)
v1 := b.NewValue0(v.Pos, OpSelect0, t)
- v2 := b.NewValue0(v.Pos, OpAMD64BSRQ, MakeTuple(types.UInt64, TypeFlags))
+ v2 := b.NewValue0(v.Pos, OpAMD64BSRQ, types.NewTuple(typ.UInt64, types.TypeFlags))
v2.AddArg(x)
v1.AddArg(v2)
v0.AddArg(v1)
v3 := b.NewValue0(v.Pos, OpAMD64MOVQconst, t)
v3.AuxInt = -1
v0.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpSelect1, TypeFlags)
- v5 := b.NewValue0(v.Pos, OpAMD64BSRQ, MakeTuple(types.UInt64, TypeFlags))
+ v4 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
+ v5 := b.NewValue0(v.Pos, OpAMD64BSRQ, types.NewTuple(typ.UInt64, types.TypeFlags))
v5.AddArg(x)
v4.AddArg(v5)
v0.AddArg(v4)
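The BitLen64 lowering works around BSR's zero-input behavior: `BSRQ` leaves its destination undefined when the source is zero, so `CMOVQEQ` substitutes -1 off the flags from a second `BSRQ`, and the final `ADDQconst [1]` yields 0 for a zero input and 1 + the highest set bit's index otherwise. A software check of that identity against `math/bits` (the `bitLen64` helper is a sketch, with `bsr` standing in for the instruction):

```go
package main

import (
	"fmt"
	"math/bits"
)

// bitLen64 mirrors the generated rule in software.
func bitLen64(x uint64) int {
	bsr := 63 - bits.LeadingZeros64(x) // BSRQ: highest set bit index (undefined on hardware when x == 0)
	if x == 0 {
		bsr = -1 // CMOVQEQ picks MOVQconst [-1] when BSRQ set ZF
	}
	return bsr + 1 // ADDQconst [1]
}

func main() {
	for _, x := range []uint64{0, 1, 2, 255, 1 << 63} {
		fmt.Println(x, bitLen64(x), bits.Len64(x))
	}
}
```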
func rewriteValueAMD64_OpCtz32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Ctz32 x)
// cond:
- // result: (Select0 (BSFQ (ORQ <types.UInt64> (MOVQconst [1<<32]) x)))
+ // result: (Select0 (BSFQ (ORQ <typ.UInt64> (MOVQconst [1<<32]) x)))
for {
x := v.Args[0]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64BSFQ, MakeTuple(types.UInt64, TypeFlags))
- v1 := b.NewValue0(v.Pos, OpAMD64ORQ, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64BSFQ, types.NewTuple(typ.UInt64, types.TypeFlags))
+ v1 := b.NewValue0(v.Pos, OpAMD64ORQ, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v2.AuxInt = 1 << 32
v1.AddArg(v2)
v1.AddArg(x)
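The Ctz32 rule avoids the zero-input problem differently: ORing in bit 32 guarantees `BSFQ` sees a nonzero operand, so no conditional move is needed and the count naturally saturates at 32 when `x == 0`. Verifying that trick in software (the `ctz32` helper is illustrative):

```go
package main

import (
	"fmt"
	"math/bits"
)

// ctz32 models the generated rule: set bit 32 so the 64-bit bit scan
// always has something to find, capping the result at 32 for x == 0.
func ctz32(x uint32) int {
	return bits.TrailingZeros64(uint64(x) | 1<<32) // BSFQ (ORQ (MOVQconst [1<<32]) x)
}

func main() {
	for _, x := range []uint32{0, 1, 8, 0xffffffff} {
		fmt.Println(x, ctz32(x), bits.TrailingZeros32(x))
	}
}
```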
func rewriteValueAMD64_OpCtz64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Ctz64 <t> x)
// cond:
- // result: (CMOVQEQ (Select0 <t> (BSFQ x)) (MOVQconst <t> [64]) (Select1 <TypeFlags> (BSFQ x)))
+ // result: (CMOVQEQ (Select0 <t> (BSFQ x)) (MOVQconst <t> [64]) (Select1 <types.TypeFlags> (BSFQ x)))
for {
t := v.Type
x := v.Args[0]
v.reset(OpAMD64CMOVQEQ)
v0 := b.NewValue0(v.Pos, OpSelect0, t)
- v1 := b.NewValue0(v.Pos, OpAMD64BSFQ, MakeTuple(types.UInt64, TypeFlags))
+ v1 := b.NewValue0(v.Pos, OpAMD64BSFQ, types.NewTuple(typ.UInt64, types.TypeFlags))
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpAMD64MOVQconst, t)
v2.AuxInt = 64
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSelect1, TypeFlags)
- v4 := b.NewValue0(v.Pos, OpAMD64BSFQ, MakeTuple(types.UInt64, TypeFlags))
+ v3 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpAMD64BSFQ, types.NewTuple(typ.UInt64, types.TypeFlags))
v4.AddArg(x)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueAMD64_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (Select0 (DIVW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVW, MakeTuple(types.Int16, types.Int16))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVW, types.NewTuple(typ.Int16, typ.Int16))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (Select0 (DIVWU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, MakeTuple(types.UInt16, types.UInt16))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, types.NewTuple(typ.UInt16, typ.UInt16))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpDiv32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32 x y)
// cond:
// result: (Select0 (DIVL x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVL, MakeTuple(types.Int32, types.Int32))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVL, types.NewTuple(typ.Int32, typ.Int32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpDiv32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32u x y)
// cond:
// result: (Select0 (DIVLU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVLU, MakeTuple(types.UInt32, types.UInt32))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVLU, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpDiv64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div64 x y)
// cond:
// result: (Select0 (DIVQ x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVQ, MakeTuple(types.Int64, types.Int64))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVQ, types.NewTuple(typ.Int64, typ.Int64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpDiv64u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div64u x y)
// cond:
// result: (Select0 (DIVQU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVQU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVQU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (Select0 (DIVW (SignExt8to16 x) (SignExt8to16 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVW, MakeTuple(types.Int16, types.Int16))
- v1 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVW, types.NewTuple(typ.Int16, typ.Int16))
+ v1 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
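There is no 8-bit divide op in this lowering, so Div8 sign-extends both operands to 16 bits, uses `DIVW` (whose tuple result is now built with `types.NewTuple(typ.Int16, typ.Int16)` for quotient and remainder), and takes `Select0` for the quotient. The equivalent computation in plain Go (the `div8` helper is a sketch, not compiler code):

```go
package main

import "fmt"

// div8 mirrors the generated rule: widen with SignExt8to16, divide with
// DIVW, and keep Select0 (the quotient) of the (quo, rem) tuple.
func div8(x, y int8) int8 {
	quo := int16(x) / int16(y) // DIVW (SignExt8to16 x) (SignExt8to16 y)
	return int8(quo)           // Select0
}

func main() {
	fmt.Println(div8(-100, 7), div8(127, -1))
}
```

The unsigned Div8u rule directly below is the same shape with `ZeroExt8to16` and `DIVWU`.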
func rewriteValueAMD64_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (Select0 (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, MakeTuple(types.UInt16, types.UInt16))
- v1 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, types.NewTuple(typ.UInt16, typ.UInt16))
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETEQ)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGEF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGEF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETAE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETG)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETA)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETG)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETA)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETG)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETA)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETG)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETA)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64TESTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64TESTQ, types.TypeFlags)
v0.AddArg(p)
v0.AddArg(p)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64TESTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64TESTL, types.TypeFlags)
v0.AddArg(p)
v0.AddArg(p)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGEF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGEF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETLE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETBE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETGF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETL)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETB)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueAMD64_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (Select1 (DIVW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVW, MakeTuple(types.Int16, types.Int16))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVW, types.NewTuple(typ.Int16, typ.Int16))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (Select1 (DIVWU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, MakeTuple(types.UInt16, types.UInt16))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, types.NewTuple(typ.UInt16, typ.UInt16))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpMod32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32 x y)
// cond:
// result: (Select1 (DIVL x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVL, MakeTuple(types.Int32, types.Int32))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVL, types.NewTuple(typ.Int32, typ.Int32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpMod32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32u x y)
// cond:
// result: (Select1 (DIVLU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVLU, MakeTuple(types.UInt32, types.UInt32))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVLU, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpMod64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod64 x y)
// cond:
// result: (Select1 (DIVQ x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVQ, MakeTuple(types.Int64, types.Int64))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVQ, types.NewTuple(typ.Int64, typ.Int64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpMod64u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod64u x y)
// cond:
// result: (Select1 (DIVQU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVQU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVQU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueAMD64_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (Select1 (DIVW (SignExt8to16 x) (SignExt8to16 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVW, MakeTuple(types.Int16, types.Int16))
- v1 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVW, types.NewTuple(typ.Int16, typ.Int16))
+ v1 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to16, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to16, typ.Int16)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueAMD64_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (Select1 (DIVWU (ZeroExt8to16 x) (ZeroExt8to16 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, MakeTuple(types.UInt16, types.UInt16))
- v1 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64DIVWU, types.NewTuple(typ.UInt16, typ.UInt16))
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to16, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to16, typ.UInt16)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueAMD64_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpAMD64MOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVBload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVBload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpAMD64MOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpAMD64MOVLstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpAMD64MOVQstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpAMD64MOVOstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVOload, TypeInt128)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVOload, types.TypeInt128)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpAMD64MOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVBload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVBload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpAMD64MOVBstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVBload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVBload, typ.UInt8)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpAMD64MOVWstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWload, typ.UInt16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpAMD64MOVLstore)
v.AuxInt = 3
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v0.AuxInt = 3
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVLstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVLstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVLload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [s] dst src mem)
// cond: s > 8 && s < 16
// result: (MOVQstore [s-8] dst (MOVQload [s-8] src mem) (MOVQstore dst (MOVQload src mem) mem))
v.reset(OpAMD64MOVQstore)
v.AuxInt = s - 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v0.AuxInt = s - 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v1.AuxInt = s % 16
v1.AddArg(src)
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVQstore, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVQstore, types.TypeMem)
v2.AddArg(dst)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVQload, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVQload, typ.UInt64)
v3.AddArg(src)
v3.AddArg(mem)
v2.AddArg(v3)
v1.AuxInt = s % 16
v1.AddArg(src)
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVOstore, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVOstore, types.TypeMem)
v2.AddArg(dst)
- v3 := b.NewValue0(v.Pos, OpAMD64MOVOload, TypeInt128)
+ v3 := b.NewValue0(v.Pos, OpAMD64MOVOload, types.TypeInt128)
v3.AddArg(src)
v3.AddArg(mem)
v2.AddArg(v3)
v.reset(OpAMD64REPMOVSQ)
v.AddArg(dst)
v.AddArg(src)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v0.AuxInt = s / 8
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueAMD64_OpNeg32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neg32F x)
// cond:
- // result: (PXOR x (MOVSSconst <types.Float32> [f2i(math.Copysign(0, -1))]))
+ // result: (PXOR x (MOVSSconst <typ.Float32> [f2i(math.Copysign(0, -1))]))
for {
x := v.Args[0]
v.reset(OpAMD64PXOR)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVSSconst, types.Float32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVSSconst, typ.Float32)
v0.AuxInt = f2i(math.Copysign(0, -1))
v.AddArg(v0)
return true
func rewriteValueAMD64_OpNeg64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neg64F x)
// cond:
- // result: (PXOR x (MOVSDconst <types.Float64> [f2i(math.Copysign(0, -1))]))
+ // result: (PXOR x (MOVSDconst <typ.Float64> [f2i(math.Copysign(0, -1))]))
for {
x := v.Args[0]
v.reset(OpAMD64PXOR)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVSDconst, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVSDconst, typ.Float64)
v0.AuxInt = f2i(math.Copysign(0, -1))
v.AddArg(v0)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNEF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNEF)
- v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64UCOMISD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPB, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpAMD64SETNE)
- v0 := b.NewValue0(v.Pos, OpAMD64CMPL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64CMPL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OffPtr [off] ptr)
// cond: config.PtrSize == 8 && is32Bit(off)
// result: (ADDQconst [off] ptr)
break
}
v.reset(OpAMD64ADDQ)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v0.AuxInt = off
v.AddArg(v0)
v.AddArg(ptr)
func rewriteValueAMD64_OpPopCount16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (PopCount16 x)
// cond:
- // result: (POPCNTL (MOVWQZX <types.UInt32> x))
+ // result: (POPCNTL (MOVWQZX <typ.UInt32> x))
for {
x := v.Args[0]
v.reset(OpAMD64POPCNTL)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWQZX, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWQZX, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValueAMD64_OpPopCount8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (PopCount8 x)
// cond:
- // result: (POPCNTL (MOVBQZX <types.UInt32> x))
+ // result: (POPCNTL (MOVBQZX <typ.UInt32> x))
for {
x := v.Args[0]
v.reset(OpAMD64POPCNTL)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVBQZX, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVBQZX, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 16
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTQ, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v3.AuxInt = 16
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 32
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTQ, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v3.AuxInt = 32
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTQ, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v2.AuxInt = 8
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPWconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPLconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTQ, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBQcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPQconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpAMD64NOTL, y.Type)
v2 := b.NewValue0(v.Pos, OpAMD64SBBLcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpAMD64CMPBconst, types.TypeFlags)
v3.AuxInt = 8
v3.AddArg(y)
v2.AddArg(v3)
}
func rewriteValueAMD64_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (MOVSDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpAMD64MOVSDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (MOVSSstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpAMD64MOVSSstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8
+ // cond: t.(*types.Type).Size() == 8
// result: (MOVQstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8) {
+ if !(t.(*types.Type).Size() == 8) {
break
}
v.reset(OpAMD64MOVQstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4
+ // cond: t.(*types.Type).Size() == 4
// result: (MOVLstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4) {
+ if !(t.(*types.Type).Size() == 4) {
break
}
v.reset(OpAMD64MOVLstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpAMD64MOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpAMD64MOVBstore)
v.reset(OpAMD64MOVBstoreconst)
v.AuxInt = makeValAndOff(0, 2)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVWstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVWstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpAMD64MOVBstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpAMD64MOVWstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpAMD64MOVLstoreconst)
v.AuxInt = makeValAndOff(0, 3)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVLstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVLstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v0.AuxInt = s % 8
v0.AddArg(destptr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(destptr)
v1.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [16] destptr mem)
// cond:
// result: (MOVQstoreconst [makeValAndOff(0,8)] destptr (MOVQstoreconst [0] destptr mem))
v.reset(OpAMD64MOVQstoreconst)
v.AuxInt = makeValAndOff(0, 8)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpAMD64MOVQstoreconst)
v.AuxInt = makeValAndOff(0, 16)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v0.AuxInt = makeValAndOff(0, 8)
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(destptr)
v1.AddArg(mem)
v.reset(OpAMD64MOVQstoreconst)
v.AuxInt = makeValAndOff(0, 24)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v0.AuxInt = makeValAndOff(0, 16)
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v1.AuxInt = makeValAndOff(0, 8)
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVQstoreconst, types.TypeMem)
v2.AuxInt = 0
v2.AddArg(destptr)
v2.AddArg(mem)
v0.AuxInt = 8
v0.AddArg(destptr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQstore, types.TypeMem)
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpAMD64DUFFZERO)
v.AuxInt = s
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVOconst, TypeInt128)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVOconst, types.TypeInt128)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
}
v.reset(OpAMD64REPSTOSQ)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v0.AuxInt = s / 8
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpAMD64MOVQconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAMD64MOVQconst, typ.UInt64)
v1.AuxInt = 0
v.AddArg(v1)
v.AddArg(mem)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockAMD64EQ:
// match: (EQ (TESTL (SHLL (MOVLconst [1]) x) y))
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64UGE
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
_ = v
cond := b.Control
b.Kind = BlockAMD64NE
- v0 := b.NewValue0(v.Pos, OpAMD64TESTB, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64TESTB, types.TypeFlags)
v0.AddArg(cond)
v0.AddArg(cond)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTL, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTQ, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQ, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTLconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
break
}
b.Kind = BlockAMD64ULT
- v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpAMD64BTQconst, types.TypeFlags)
v0.AuxInt = log2(c)
v0.AddArg(x)
b.SetControl(v0)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValueARM(v *Value) bool {
switch v.Op {
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
y := v_0.Args[0]
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPshiftLL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPshiftLL, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v0.AddArg(y)
y := v_0.Args[0]
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPshiftRL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPshiftRL, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v0.AddArg(y)
y := v_0.Args[0]
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPshiftRA, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPshiftRA, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v0.AddArg(y)
z := v_0.Args[1]
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPshiftLLreg, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPshiftLLreg, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v0.AddArg(z)
z := v_0.Args[1]
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPshiftRLreg, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPshiftRLreg, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v0.AddArg(z)
z := v_0.Args[1]
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPshiftRAreg, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPshiftRAreg, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v0.AddArg(z)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARMSLLconst, x.Type)
v1.AuxInt = d
x := v.Args[1]
y := v.Args[2]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARMSLL, x.Type)
v1.AddArg(x)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARMSRAconst, x.Type)
v1.AuxInt = d
x := v.Args[1]
y := v.Args[2]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARMSRA, x.Type)
v1.AddArg(x)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARMSRLconst, x.Type)
v1.AuxInt = d
x := v.Args[1]
y := v.Args[2]
v.reset(OpARMInvertFlags)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARMSRL, x.Type)
v1.AddArg(x)
func rewriteValueARM_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (Div32 (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpDiv32)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (Div32u (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpDiv32u)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpDiv32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32 x y)
// cond:
- // result: (SUB (XOR <types.UInt32> (Select0 <types.UInt32> (CALLudiv (SUB <types.UInt32> (XOR x <types.UInt32> (Signmask x)) (Signmask x)) (SUB <types.UInt32> (XOR y <types.UInt32> (Signmask y)) (Signmask y)))) (Signmask (XOR <types.UInt32> x y))) (Signmask (XOR <types.UInt32> x y)))
+ // result: (SUB (XOR <typ.UInt32> (Select0 <typ.UInt32> (CALLudiv (SUB <typ.UInt32> (XOR x <typ.UInt32> (Signmask x)) (Signmask x)) (SUB <typ.UInt32> (XOR y <typ.UInt32> (Signmask y)) (Signmask y)))) (Signmask (XOR <typ.UInt32> x y))) (Signmask (XOR <typ.UInt32> x y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSUB)
- v0 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpSelect0, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpARMCALLudiv, MakeTuple(types.UInt32, types.UInt32))
- v3 := b.NewValue0(v.Pos, OpARMSUB, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpSelect0, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARMCALLudiv, types.NewTuple(typ.UInt32, typ.UInt32))
+ v3 := b.NewValue0(v.Pos, OpARMSUB, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v5 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v5.AddArg(x)
v4.AddArg(v5)
v3.AddArg(v4)
- v6 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v6 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v6.AddArg(x)
v3.AddArg(v6)
v2.AddArg(v3)
- v7 := b.NewValue0(v.Pos, OpARMSUB, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpARMSUB, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
v8.AddArg(y)
- v9 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v9 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v9.AddArg(y)
v8.AddArg(v9)
v7.AddArg(v8)
- v10 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v10 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v10.AddArg(y)
v7.AddArg(v10)
v2.AddArg(v7)
v1.AddArg(v2)
v0.AddArg(v1)
- v11 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
- v12 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
+ v11 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
+ v12 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
v12.AddArg(x)
v12.AddArg(y)
v11.AddArg(v12)
v0.AddArg(v11)
v.AddArg(v0)
- v13 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
- v14 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
+ v13 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
+ v14 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
v14.AddArg(x)
v14.AddArg(y)
v13.AddArg(v14)
func rewriteValueARM_OpDiv32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32u x y)
// cond:
- // result: (Select0 <types.UInt32> (CALLudiv x y))
+ // result: (Select0 <typ.UInt32> (CALLudiv x y))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v.Type = types.UInt32
- v0 := b.NewValue0(v.Pos, OpARMCALLudiv, MakeTuple(types.UInt32, types.UInt32))
+ v.Type = typ.UInt32
+ v0 := b.NewValue0(v.Pos, OpARMCALLudiv, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (Div32 (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpDiv32)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (Div32u (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpDiv32u)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpEq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq16 x y)
// cond:
// result: (Equal (CMP (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpEq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq8 x y)
// cond:
// result: (Equal (CMP (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpEqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqB x y)
// cond:
- // result: (XORconst [1] (XOR <types.Bool> x y))
+ // result: (XORconst [1] (XOR <typ.Bool> x y))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpARMXOR, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpARMXOR, typ.Bool)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpGeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16 x y)
// cond:
// result: (GreaterEqual (CMP (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpGeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16U x y)
// cond:
// result: (GreaterEqualU (CMP (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpGeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8 x y)
// cond:
// result: (GreaterEqual (CMP (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpGeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8U x y)
// cond:
// result: (GreaterEqualU (CMP (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpGreater16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16 x y)
// cond:
// result: (GreaterThan (CMP (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpGreater16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16U x y)
// cond:
// result: (GreaterThanU (CMP (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMPF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpGreater8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8 x y)
// cond:
// result: (GreaterThan (CMP (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpGreater8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8U x y)
// cond:
// result: (GreaterThanU (CMP (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpARMLessThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
for {
ptr := v.Args[0]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = 0
v0.AddArg(ptr)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpARMLessEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
func rewriteValueARM_OpLeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16 x y)
// cond:
// result: (LessEqual (CMP (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpLeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16U x y)
// cond:
// result: (LessEqualU (CMP (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPF, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueARM_OpLeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8 x y)
// cond:
// result: (LessEqual (CMP (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpLeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8U x y)
// cond:
// result: (LessEqualU (CMP (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessEqualU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpLess16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16 x y)
// cond:
// result: (LessThan (CMP (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessThan)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpLess16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16U x y)
// cond:
// result: (LessThanU (CMP (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessThan)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMPF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPF, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMGreaterThan)
- v0 := b.NewValue0(v.Pos, OpARMCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueARM_OpLess8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8 x y)
// cond:
// result: (LessThan (CMP (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessThan)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpLess8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8U x y)
// cond:
// result: (LessThanU (CMP (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMLessThanU)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM_OpLsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x16 x y)
// cond:
// result: (CMOVWHSconst (SLL <x.Type> x (ZeroExt16to32 y)) (CMPconst [256] (ZeroExt16to32 y)) [0])
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSLL, x.Type)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(y)
v2.AddArg(v3)
v.AddArg(v2)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueARM_OpLsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x8 x y)
// cond:
// result: (SLL x (ZeroExt8to32 y))
y := v.Args[1]
v.reset(OpARMSLL)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
return true
func rewriteValueARM_OpLsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x16 x y)
// cond:
// result: (CMOVWHSconst (SLL <x.Type> x (ZeroExt16to32 y)) (CMPconst [256] (ZeroExt16to32 y)) [0])
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSLL, x.Type)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(y)
v2.AddArg(v3)
v.AddArg(v2)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueARM_OpLsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x8 x y)
// cond:
// result: (SLL x (ZeroExt8to32 y))
y := v.Args[1]
v.reset(OpARMSLL)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
return true
func rewriteValueARM_OpLsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x16 x y)
// cond:
// result: (CMOVWHSconst (SLL <x.Type> x (ZeroExt16to32 y)) (CMPconst [256] (ZeroExt16to32 y)) [0])
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSLL, x.Type)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(y)
v2.AddArg(v3)
v.AddArg(v2)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueARM_OpLsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x8 x y)
// cond:
// result: (SLL x (ZeroExt8to32 y))
y := v.Args[1]
v.reset(OpARMSLL)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
return true
func rewriteValueARM_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (Mod32 (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (Mod32u (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32u)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpMod32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32 x y)
// cond:
- // result: (SUB (XOR <types.UInt32> (Select1 <types.UInt32> (CALLudiv (SUB <types.UInt32> (XOR <types.UInt32> x (Signmask x)) (Signmask x)) (SUB <types.UInt32> (XOR <types.UInt32> y (Signmask y)) (Signmask y)))) (Signmask x)) (Signmask x))
+ // result: (SUB (XOR <typ.UInt32> (Select1 <typ.UInt32> (CALLudiv (SUB <typ.UInt32> (XOR <typ.UInt32> x (Signmask x)) (Signmask x)) (SUB <typ.UInt32> (XOR <typ.UInt32> y (Signmask y)) (Signmask y)))) (Signmask x)) (Signmask x))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSUB)
- v0 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpSelect1, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpARMCALLudiv, MakeTuple(types.UInt32, types.UInt32))
- v3 := b.NewValue0(v.Pos, OpARMSUB, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpSelect1, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARMCALLudiv, types.NewTuple(typ.UInt32, typ.UInt32))
+ v3 := b.NewValue0(v.Pos, OpARMSUB, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v5 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v5.AddArg(x)
v4.AddArg(v5)
v3.AddArg(v4)
- v6 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v6 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v6.AddArg(x)
v3.AddArg(v6)
v2.AddArg(v3)
- v7 := b.NewValue0(v.Pos, OpARMSUB, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpARMXOR, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpARMSUB, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpARMXOR, typ.UInt32)
v8.AddArg(y)
- v9 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v9 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v9.AddArg(y)
v8.AddArg(v9)
v7.AddArg(v8)
- v10 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v10 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v10.AddArg(y)
v7.AddArg(v10)
v2.AddArg(v7)
v1.AddArg(v2)
v0.AddArg(v1)
- v11 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v11 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v11.AddArg(x)
v0.AddArg(v11)
v.AddArg(v0)
- v12 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v12 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v12.AddArg(x)
v.AddArg(v12)
return true
func rewriteValueARM_OpMod32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32u x y)
// cond:
- // result: (Select1 <types.UInt32> (CALLudiv x y))
+ // result: (Select1 <typ.UInt32> (CALLudiv x y))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v.Type = types.UInt32
- v0 := b.NewValue0(v.Pos, OpARMCALLudiv, MakeTuple(types.UInt32, types.UInt32))
+ v.Type = typ.UInt32
+ v0 := b.NewValue0(v.Pos, OpARMCALLudiv, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (Mod32 (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (Mod32u (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32u)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpARMMOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [2] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore dst (MOVHUload src mem) mem)
for {
if v.AuxInt != 2 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpARMMOVHstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVHUload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpARMMOVHUload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpARMMOVBstore)
v.AuxInt = 1
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v0.AuxInt = 1
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v2 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
return true
}
// match: (Move [4] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore dst (MOVWload src mem) mem)
for {
if v.AuxInt != 4 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpARMMOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVWload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [4] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [2] dst (MOVHUload [2] src mem) (MOVHstore dst (MOVHUload src mem) mem))
for {
if v.AuxInt != 4 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpARMMOVHstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVHUload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpARMMOVHUload, typ.UInt16)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVHstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARMMOVHUload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpARMMOVHUload, typ.UInt16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpARMMOVBstore)
v.AuxInt = 3
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v0.AuxInt = 3
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v2 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v2.AuxInt = 2
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v3.AuxInt = 1
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v4 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v4.AuxInt = 1
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v5.AddArg(dst)
- v6 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v6 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v6.AddArg(src)
v6.AddArg(mem)
v5.AddArg(v6)
v.reset(OpARMMOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v1.AuxInt = 1
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v2 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v2.AuxInt = 1
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpARMMOVBUload, types.UInt8)
+ v4 := b.NewValue0(v.Pos, OpARMMOVBUload, typ.UInt8)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [s] {t} dst src mem)
- // cond: s%4 == 0 && s > 4 && s <= 512 && t.(Type).Alignment()%4 == 0 && !config.noDuffDevice
+ // cond: s%4 == 0 && s > 4 && s <= 512 && t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice
// result: (DUFFCOPY [8 * (128 - int64(s/4))] dst src mem)
for {
s := v.AuxInt
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(s%4 == 0 && s > 4 && s <= 512 && t.(Type).Alignment()%4 == 0 && !config.noDuffDevice) {
+ if !(s%4 == 0 && s > 4 && s <= 512 && t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice) {
break
}
v.reset(OpARMDUFFCOPY)
return true
}
// match: (Move [s] {t} dst src mem)
- // cond: (s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0
- // result: (LoweredMove [t.(Type).Alignment()] dst src (ADDconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)]) mem)
+ // cond: (s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0
+ // result: (LoweredMove [t.(*types.Type).Alignment()] dst src (ADDconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)]) mem)
for {
s := v.AuxInt
t := v.Aux
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !((s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0) {
+ if !((s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0) {
break
}
v.reset(OpARMLoweredMove)
- v.AuxInt = t.(Type).Alignment()
+ v.AuxInt = t.(*types.Type).Alignment()
v.AddArg(dst)
v.AddArg(src)
v0 := b.NewValue0(v.Pos, OpARMADDconst, src.Type)
- v0.AuxInt = s - moveSize(t.(Type).Alignment(), config)
+ v0.AuxInt = s - moveSize(t.(*types.Type).Alignment(), config)
v0.AddArg(src)
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueARM_OpNeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq16 x y)
// cond:
// result: (NotEqual (CMP (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpNeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq8 x y)
// cond:
// result: (NotEqual (CMP (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMNotEqual)
- v0 := b.NewValue0(v.Pos, OpARMCMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpRsh16Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux16 x y)
// cond:
// result: (CMOVWHSconst (SRL <x.Type> (ZeroExt16to32 x) (ZeroExt16to32 y)) (CMPconst [256] (ZeroExt16to32 y)) [0])
v.reset(OpARMCMOVWHSconst)
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSRL, x.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v3.AuxInt = 256
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM_OpRsh16Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux32 x y)
// cond:
// result: (CMOVWHSconst (SRL <x.Type> (ZeroExt16to32 x) y) (CMPconst [256] y) [0])
v.reset(OpARMCMOVWHSconst)
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSRL, x.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 x (Const64 [c]))
// cond: uint64(c) < 16
- // result: (SRLconst (SLLconst <types.UInt32> x [16]) [c+16])
+ // result: (SRLconst (SLLconst <typ.UInt32> x [16]) [c+16])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpARMSRLconst)
v.AuxInt = c + 16
- v0 := b.NewValue0(v.Pos, OpARMSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMSLLconst, typ.UInt32)
v0.AuxInt = 16
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueARM_OpRsh16Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux8 x y)
// cond:
// result: (SRL (ZeroExt16to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRL)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpRsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x16 x y)
// cond:
// result: (SRAcond (SignExt16to32 x) (ZeroExt16to32 y) (CMPconst [256] (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRAcond)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(y)
v2.AddArg(v3)
v.AddArg(v2)
func rewriteValueARM_OpRsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x32 x y)
// cond:
// result: (SRAcond (SignExt16to32 x) y (CMPconst [256] y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRAcond)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueARM_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 x (Const64 [c]))
// cond: uint64(c) < 16
- // result: (SRAconst (SLLconst <types.UInt32> x [16]) [c+16])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [16]) [c+16])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpARMSRAconst)
v.AuxInt = c + 16
- v0 := b.NewValue0(v.Pos, OpARMSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMSLLconst, typ.UInt32)
v0.AuxInt = 16
v0.AddArg(x)
v.AddArg(v0)
}
// match: (Rsh16x64 x (Const64 [c]))
// cond: uint64(c) >= 16
- // result: (SRAconst (SLLconst <types.UInt32> x [16]) [31])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [16]) [31])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpARMSRAconst)
v.AuxInt = 31
- v0 := b.NewValue0(v.Pos, OpARMSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMSLLconst, typ.UInt32)
v0.AuxInt = 16
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueARM_OpRsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x8 x y)
// cond:
// result: (SRA (SignExt16to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpRsh32Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux16 x y)
// cond:
// result: (CMOVWHSconst (SRL <x.Type> x (ZeroExt16to32 y)) (CMPconst [256] (ZeroExt16to32 y)) [0])
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSRL, x.Type)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(y)
v2.AddArg(v3)
v.AddArg(v2)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueARM_OpRsh32Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux8 x y)
// cond:
// result: (SRL x (ZeroExt8to32 y))
y := v.Args[1]
v.reset(OpARMSRL)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
return true
func rewriteValueARM_OpRsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x16 x y)
// cond:
// result: (SRAcond x (ZeroExt16to32 y) (CMPconst [256] (ZeroExt16to32 y)))
y := v.Args[1]
v.reset(OpARMSRAcond)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v1.AddArg(v2)
v.AddArg(v1)
v.reset(OpARMSRAcond)
v.AddArg(x)
v.AddArg(y)
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = 256
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM_OpRsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x8 x y)
// cond:
// result: (SRA x (ZeroExt8to32 y))
y := v.Args[1]
v.reset(OpARMSRA)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
return true
func rewriteValueARM_OpRsh8Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux16 x y)
// cond:
// result: (CMOVWHSconst (SRL <x.Type> (ZeroExt8to32 x) (ZeroExt16to32 y)) (CMPconst [256] (ZeroExt16to32 y)) [0])
v.reset(OpARMCMOVWHSconst)
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSRL, x.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v3.AuxInt = 256
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM_OpRsh8Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux32 x y)
// cond:
// result: (CMOVWHSconst (SRL <x.Type> (ZeroExt8to32 x) y) (CMPconst [256] y) [0])
v.reset(OpARMCMOVWHSconst)
v.AuxInt = 0
v0 := b.NewValue0(v.Pos, OpARMSRL, x.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 x (Const64 [c]))
// cond: uint64(c) < 8
- // result: (SRLconst (SLLconst <types.UInt32> x [24]) [c+24])
+ // result: (SRLconst (SLLconst <typ.UInt32> x [24]) [c+24])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpARMSRLconst)
v.AuxInt = c + 24
- v0 := b.NewValue0(v.Pos, OpARMSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMSLLconst, typ.UInt32)
v0.AuxInt = 24
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueARM_OpRsh8Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux8 x y)
// cond:
// result: (SRL (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRL)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM_OpRsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x16 x y)
// cond:
// result: (SRAcond (SignExt8to32 x) (ZeroExt16to32 y) (CMPconst [256] (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRAcond)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v2.AuxInt = 256
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(y)
v2.AddArg(v3)
v.AddArg(v2)
func rewriteValueARM_OpRsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x32 x y)
// cond:
// result: (SRAcond (SignExt8to32 x) y (CMPconst [256] y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRAcond)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v1.AuxInt = 256
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueARM_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 x (Const64 [c]))
// cond: uint64(c) < 8
- // result: (SRAconst (SLLconst <types.UInt32> x [24]) [c+24])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [24]) [c+24])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpARMSRAconst)
v.AuxInt = c + 24
- v0 := b.NewValue0(v.Pos, OpARMSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMSLLconst, typ.UInt32)
v0.AuxInt = 24
v0.AddArg(x)
v.AddArg(v0)
}
// match: (Rsh8x64 x (Const64 [c]))
// cond: uint64(c) >= 8
- // result: (SRAconst (SLLconst <types.UInt32> x [24]) [31])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [24]) [31])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpARMSRAconst)
v.AuxInt = 31
- v0 := b.NewValue0(v.Pos, OpARMSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMSLLconst, typ.UInt32)
v0.AuxInt = 24
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueARM_OpRsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x8 x y)
// cond:
// result: (SRA (SignExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARMSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
}
func rewriteValueARM_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpARMMOVBstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVHstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpARMMOVHstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && !is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && !is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)) {
break
}
v.reset(OpARMMOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (MOVFstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpARMMOVFstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (MOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpARMMOVDstore)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [0] _ mem)
// cond:
// result: mem
mem := v.Args[1]
v.reset(OpARMMOVBstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [2] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore ptr (MOVWconst [0]) mem)
for {
if v.AuxInt != 2 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpARMMOVHstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
v.reset(OpARMMOVBstore)
v.AuxInt = 1
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
return true
}
// match: (Zero [4] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore ptr (MOVWconst [0]) mem)
for {
if v.AuxInt != 4 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpARMMOVWstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [4] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [2] ptr (MOVWconst [0]) (MOVHstore [0] ptr (MOVWconst [0]) mem))
for {
if v.AuxInt != 4 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpARMMOVHstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVHstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpARMMOVBstore)
v.AuxInt = 3
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v3.AuxInt = 1
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v5.AuxInt = 0
v5.AddArg(ptr)
- v6 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v6.AuxInt = 0
v5.AddArg(v6)
v5.AddArg(mem)
v.reset(OpARMMOVBstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v1.AuxInt = 1
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARMMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARMMOVBstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [s] {t} ptr mem)
- // cond: s%4 == 0 && s > 4 && s <= 512 && t.(Type).Alignment()%4 == 0 && !config.noDuffDevice
+ // cond: s%4 == 0 && s > 4 && s <= 512 && t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice
// result: (DUFFZERO [4 * (128 - int64(s/4))] ptr (MOVWconst [0]) mem)
for {
s := v.AuxInt
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(s%4 == 0 && s > 4 && s <= 512 && t.(Type).Alignment()%4 == 0 && !config.noDuffDevice) {
+ if !(s%4 == 0 && s > 4 && s <= 512 && t.(*types.Type).Alignment()%4 == 0 && !config.noDuffDevice) {
break
}
v.reset(OpARMDUFFZERO)
v.AuxInt = 4 * (128 - int64(s/4))
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [s] {t} ptr mem)
- // cond: (s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0
- // result: (LoweredZero [t.(Type).Alignment()] ptr (ADDconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)]) (MOVWconst [0]) mem)
+ // cond: (s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0
+ // result: (LoweredZero [t.(*types.Type).Alignment()] ptr (ADDconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)]) (MOVWconst [0]) mem)
for {
s := v.AuxInt
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !((s > 512 || config.noDuffDevice) || t.(Type).Alignment()%4 != 0) {
+ if !((s > 512 || config.noDuffDevice) || t.(*types.Type).Alignment()%4 != 0) {
break
}
v.reset(OpARMLoweredZero)
- v.AuxInt = t.(Type).Alignment()
+ v.AuxInt = t.(*types.Type).Alignment()
v.AddArg(ptr)
v0 := b.NewValue0(v.Pos, OpARMADDconst, ptr.Type)
- v0.AuxInt = s - moveSize(t.(Type).Alignment(), config)
+ v0.AuxInt = s - moveSize(t.(*types.Type).Alignment(), config)
v0.AddArg(ptr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARMMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpARMMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
v.AddArg(mem)
func rewriteValueARM_OpZeromask_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zeromask x)
// cond:
- // result: (SRAconst (RSBshiftRL <types.Int32> x x [1]) [31])
+ // result: (SRAconst (RSBshiftRL <typ.Int32> x x [1]) [31])
for {
x := v.Args[0]
v.reset(OpARMSRAconst)
v.AuxInt = 31
- v0 := b.NewValue0(v.Pos, OpARMRSBshiftRL, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARMRSBshiftRL, typ.Int32)
v0.AuxInt = 1
v0.AddArg(x)
v0.AddArg(x)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockARMEQ:
// match: (EQ (FlagEQ) yes no)
_ = v
cond := b.Control
b.Kind = BlockARMNE
- v0 := b.NewValue0(v.Pos, OpARMCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARMCMPconst, types.TypeFlags)
v0.AuxInt = 0
v0.AddArg(cond)
b.SetControl(v0)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValueARM64(v *Value) bool {
switch v.Op {
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
y := v_0.Args[0]
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPshiftLL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPshiftLL, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v0.AddArg(y)
y := v_0.Args[0]
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPshiftRL, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPshiftRL, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v0.AddArg(y)
y := v_0.Args[0]
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPshiftRA, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPshiftRA, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v0.AddArg(y)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPWconst, types.TypeFlags)
v0.AuxInt = int64(int32(c))
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARM64SLLconst, x.Type)
v1.AuxInt = d
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARM64SRAconst, x.Type)
v1.AuxInt = d
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpARM64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v0.AuxInt = c
v1 := b.NewValue0(v.Pos, OpARM64SRLconst, x.Type)
v1.AuxInt = d
func rewriteValueARM64_OpBitLen64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitLen64 x)
// cond:
- // result: (SUB (MOVDconst [64]) (CLZ <types.Int> x))
+ // result: (SUB (MOVDconst [64]) (CLZ <typ.Int> x))
for {
x := v.Args[0]
v.reset(OpARM64SUB)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 64
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64CLZ, types.Int)
+ v1 := b.NewValue0(v.Pos, OpARM64CLZ, typ.Int)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueARM64_OpBitRev16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitRev16 x)
// cond:
- // result: (SRLconst [48] (RBIT <types.UInt64> x))
+ // result: (SRLconst [48] (RBIT <typ.UInt64> x))
for {
x := v.Args[0]
v.reset(OpARM64SRLconst)
v.AuxInt = 48
- v0 := b.NewValue0(v.Pos, OpARM64RBIT, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64RBIT, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValueARM64_OpBitRev8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitRev8 x)
// cond:
- // result: (SRLconst [56] (RBIT <types.UInt64> x))
+ // result: (SRLconst [56] (RBIT <typ.UInt64> x))
for {
x := v.Args[0]
v.reset(OpARM64SRLconst)
v.AuxInt = 56
- v0 := b.NewValue0(v.Pos, OpARM64RBIT, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64RBIT, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValueARM64_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (DIVW (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64DIVW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (UDIVW (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64UDIVW)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (DIVW (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64DIVW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (UDIVW (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64UDIVW)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpEq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq16 x y)
// cond:
// result: (Equal (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpEq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq8 x y)
// cond:
// result: (Equal (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpEqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqB x y)
// cond:
- // result: (XOR (MOVDconst [1]) (XOR <types.Bool> x y))
+ // result: (XOR (MOVDconst [1]) (XOR <typ.Bool> x y))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64XOR)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64XOR, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpARM64XOR, typ.Bool)
v1.AddArg(x)
v1.AddArg(y)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64Equal)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpGeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16 x y)
// cond:
// result: (GreaterEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpGeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16U x y)
// cond:
// result: (GreaterEqualU (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpGeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8 x y)
// cond:
// result: (GreaterEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpGeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8U x y)
// cond:
// result: (GreaterEqualU (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpGreater16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16 x y)
// cond:
// result: (GreaterThan (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpGreater16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16U x y)
// cond:
// result: (GreaterThanU (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpGreater8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8 x y)
// cond:
// result: (GreaterThan (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpGreater8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8U x y)
// cond:
// result: (GreaterThanU (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpHmul32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32 x y)
// cond:
- // result: (SRAconst (MULL <types.Int64> x y) [32])
+ // result: (SRAconst (MULL <typ.Int64> x y) [32])
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRAconst)
v.AuxInt = 32
- v0 := b.NewValue0(v.Pos, OpARM64MULL, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpARM64MULL, typ.Int64)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpHmul32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32u x y)
// cond:
- // result: (SRAconst (UMULL <types.UInt64> x y) [32])
+ // result: (SRAconst (UMULL <typ.UInt64> x y) [32])
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRAconst)
v.AuxInt = 32
- v0 := b.NewValue0(v.Pos, OpARM64UMULL, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64UMULL, typ.UInt64)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpARM64LessThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
for {
ptr := v.Args[0]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v0.AuxInt = 0
v0.AddArg(ptr)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpARM64LessEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
func rewriteValueARM64_OpLeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16 x y)
// cond:
// result: (LessEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpLeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16U x y)
// cond:
// result: (LessEqualU (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPS, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpLeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8 x y)
// cond:
// result: (LessEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpLeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8U x y)
// cond:
// result: (LessEqualU (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessEqualU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpLess16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16 x y)
// cond:
// result: (LessThan (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpLess16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16U x y)
// cond:
// result: (LessThanU (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPS, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpLess8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8 x y)
// cond:
// result: (LessThan (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThan)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpLess8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8U x y)
// cond:
// result: (LessThanU (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64LessThanU)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueARM64_OpLsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x16 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x32 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
v1 := b.NewValue0(v.Pos, OpConst64, t)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM64_OpLsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x8 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x16 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x32 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
v1 := b.NewValue0(v.Pos, OpConst64, t)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM64_OpLsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x8 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x16 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x32 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
v1 := b.NewValue0(v.Pos, OpConst64, t)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM64_OpLsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x8 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x16 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpLsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x32 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
v1 := b.NewValue0(v.Pos, OpConst64, t)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM64_OpLsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x8 <t> x y)
// cond:
// result: (CSELULT (SLL <t> x (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (MODW (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64MODW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (UMODW (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64UMODW)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (MODW (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64MODW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (UMODW (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64UMODW)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueARM64_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpARM64MOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpARM64MOVHstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVHUload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVHUload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpARM64MOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVWUload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVWUload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpARM64MOVDstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpARM64MOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVHUload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVHUload, typ.UInt16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpARM64MOVBstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, typ.UInt8)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVWUload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVWUload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpARM64MOVHstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVHUload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVHUload, typ.UInt16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVWUload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVWUload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpARM64MOVBstore)
v.AuxInt = 6
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVBUload, typ.UInt8)
v0.AuxInt = 6
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVHUload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVHUload, typ.UInt16)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64MOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARM64MOVWstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpARM64MOVWUload, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpARM64MOVWUload, typ.UInt32)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
v.reset(OpARM64MOVWstore)
v.AuxInt = 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVWUload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVWUload, typ.UInt32)
v0.AuxInt = 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [16] dst src mem)
// cond:
// result: (MOVDstore [8] dst (MOVDload [8] src mem) (MOVDstore dst (MOVDload src mem) mem))
v.reset(OpARM64MOVDstore)
v.AuxInt = 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v0.AuxInt = 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpARM64MOVDstore)
v.AuxInt = 16
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v0.AuxInt = 16
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v2.AuxInt = 8
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpARM64MOVDload, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpARM64MOVDload, typ.UInt64)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
v1.AuxInt = s - s%8
v1.AddArg(src)
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMove, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpMove, types.TypeMem)
v2.AuxInt = s - s%8
v2.AddArg(dst)
v2.AddArg(src)
func rewriteValueARM64_OpNeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq16 x y)
// cond:
// result: (NotEqual (CMPW (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPS, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPS, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64FCMPD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64FCMPD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpNeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq8 x y)
// cond:
// result: (NotEqual (CMPW (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpARM64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64NotEqual)
- v0 := b.NewValue0(v.Pos, OpARM64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpARM64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueARM64_OpNot_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Not x)
// cond:
// result: (XOR (MOVDconst [1]) x)
for {
x := v.Args[0]
v.reset(OpARM64XOR)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
v.AddArg(x)
func rewriteValueARM64_OpRsh16Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux16 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt16to64 x) (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh16Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux32 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt16to64 x) (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 x (MOVDconst [c]))
// cond: uint64(c) < 16
// result: (SRLconst (ZeroExt16to64 x) [c])
}
v.reset(OpARM64SRLconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
return true
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v.AddArg(v3)
func rewriteValueARM64_OpRsh16Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux8 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt16to64 x) (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x16 x y)
// cond:
// result: (SRA (SignExt16to64 x) (CSELULT <y.Type> (ZeroExt16to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt16to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x32 x y)
// cond:
// result: (SRA (SignExt16to64 x) (CSELULT <y.Type> (ZeroExt32to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt32to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 x (MOVDconst [c]))
// cond: uint64(c) < 16
// result: (SRAconst (SignExt16to64 x) [c])
}
v.reset(OpARM64SRAconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpARM64SRAconst)
v.AuxInt = 63
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
v2 := b.NewValue0(v.Pos, OpConst64, y.Type)
v2.AuxInt = 63
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v1.AddArg(v3)
func rewriteValueARM64_OpRsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x8 x y)
// cond:
// result: (SRA (SignExt16to64 x) (CSELULT <y.Type> (ZeroExt8to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt8to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh32Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux16 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt32to64 x) (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh32Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux32 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt32to64 x) (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh32Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux64 x (MOVDconst [c]))
// cond: uint64(c) < 32
// result: (SRLconst (ZeroExt32to64 x) [c])
}
v.reset(OpARM64SRLconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
return true
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v.AddArg(v3)
func rewriteValueARM64_OpRsh32Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux8 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt32to64 x) (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x16 x y)
// cond:
// result: (SRA (SignExt32to64 x) (CSELULT <y.Type> (ZeroExt16to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt16to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x32 x y)
// cond:
// result: (SRA (SignExt32to64 x) (CSELULT <y.Type> (ZeroExt32to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt32to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x64 x (MOVDconst [c]))
// cond: uint64(c) < 32
// result: (SRAconst (SignExt32to64 x) [c])
}
v.reset(OpARM64SRAconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpARM64SRAconst)
v.AuxInt = 63
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
v2 := b.NewValue0(v.Pos, OpConst64, y.Type)
v2.AuxInt = 63
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v1.AddArg(v3)
func rewriteValueARM64_OpRsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x8 x y)
// cond:
// result: (SRA (SignExt32to64 x) (CSELULT <y.Type> (ZeroExt8to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt8to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh64Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux16 <t> x y)
// cond:
// result: (CSELULT (SRL <t> x (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpRsh64Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux32 <t> x y)
// cond:
// result: (CSELULT (SRL <t> x (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
v1 := b.NewValue0(v.Pos, OpConst64, t)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueARM64_OpRsh64Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux8 <t> x y)
// cond:
// result: (CSELULT (SRL <t> x (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueARM64_OpRsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x16 x y)
// cond:
// result: (SRA x (CSELULT <y.Type> (ZeroExt16to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt16to64 y))))
v.reset(OpARM64SRA)
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v2 := b.NewValue0(v.Pos, OpConst64, y.Type)
v2.AuxInt = 63
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v0.AddArg(v3)
func rewriteValueARM64_OpRsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x32 x y)
// cond:
// result: (SRA x (CSELULT <y.Type> (ZeroExt32to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt32to64 y))))
v.reset(OpARM64SRA)
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v2 := b.NewValue0(v.Pos, OpConst64, y.Type)
v2.AuxInt = 63
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v0.AddArg(v3)
v1 := b.NewValue0(v.Pos, OpConst64, y.Type)
v1.AuxInt = 63
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v2.AuxInt = 64
v2.AddArg(y)
v0.AddArg(v2)
func rewriteValueARM64_OpRsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x8 x y)
// cond:
// result: (SRA x (CSELULT <y.Type> (ZeroExt8to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt8to64 y))))
v.reset(OpARM64SRA)
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v0.AddArg(v1)
v2 := b.NewValue0(v.Pos, OpConst64, y.Type)
v2.AuxInt = 63
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v0.AddArg(v3)
func rewriteValueARM64_OpRsh8Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux16 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt8to64 x) (ZeroExt16to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt16to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh8Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux32 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt8to64 x) (ZeroExt32to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt32to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 x (MOVDconst [c]))
// cond: uint64(c) < 8
// result: (SRLconst (ZeroExt8to64 x) [c])
}
v.reset(OpARM64SRLconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
return true
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v2 := b.NewValue0(v.Pos, OpConst64, t)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v.AddArg(v3)
func rewriteValueARM64_OpRsh8Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux8 <t> x y)
// cond:
// result: (CSELULT (SRL <t> (ZeroExt8to64 x) (ZeroExt8to64 y)) (Const64 <t> [0]) (CMPconst [64] (ZeroExt8to64 y)))
y := v.Args[1]
v.reset(OpARM64CSELULT)
v0 := b.NewValue0(v.Pos, OpARM64SRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpConst64, t)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueARM64_OpRsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x16 x y)
// cond:
// result: (SRA (SignExt8to64 x) (CSELULT <y.Type> (ZeroExt16to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt16to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x32 x y)
// cond:
// result: (SRA (SignExt8to64 x) (CSELULT <y.Type> (ZeroExt32to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt32to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueARM64_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 x (MOVDconst [c]))
// cond: uint64(c) < 8
// result: (SRAconst (SignExt8to64 x) [c])
}
v.reset(OpARM64SRAconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpARM64SRAconst)
v.AuxInt = 63
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
v2 := b.NewValue0(v.Pos, OpConst64, y.Type)
v2.AuxInt = 63
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v3.AuxInt = 64
v3.AddArg(y)
v1.AddArg(v3)
func rewriteValueARM64_OpRsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x8 x y)
// cond:
// result: (SRA (SignExt8to64 x) (CSELULT <y.Type> (ZeroExt8to64 y) (Const64 <y.Type> [63]) (CMPconst [64] (ZeroExt8to64 y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpARM64SRA)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpARM64CSELULT, y.Type)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
v3 := b.NewValue0(v.Pos, OpConst64, y.Type)
v3.AuxInt = 63
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpARM64CMPconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpARM64CMPconst, types.TypeFlags)
v4.AuxInt = 64
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
}
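A plain-Go model (not compiler code) of the semantics the Rsh8x8 rule above implements: Go requires a signed right shift by a count >= the operand width to yield all sign bits, so the rewrite clamps the zero-extended count to 63 (CSELULT picks y when y < 64, else the Const64 [63]) before issuing a single SRA.

```go
// rsh8x8 models the ARM64 Rsh8x8 lowering in ordinary Go.
func rsh8x8(x int8, y uint8) int8 {
	shift := uint64(y) // ZeroExt8to64 y
	if shift >= 64 {   // CMPconst [64]; the rule selects branch-free
		shift = 63 // shifting an int64 by 63 leaves only sign bits
	}
	return int8(int64(x) >> shift) // SignExt8to64 x, then SRA
}
```

The same clamp-to-63 shape appears in all the Rsh*x* rules in this file; only the extension widths differ.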
func rewriteValueARM64_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpARM64MOVBstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVHstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpARM64MOVHstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && !is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && !is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)) {
break
}
v.reset(OpARM64MOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && !is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type)
// result: (MOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && !is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type)) {
break
}
v.reset(OpARM64MOVDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (FMOVSstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpARM64FMOVSstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (FMOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpARM64FMOVDstore)
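A minimal sketch (hypothetical stand-in types, not the real compiler API) of the aux assertion these Store rules change: Value.Aux is an interface{}, so each rule asserts it to a concrete type before calling Size(). The CL replaces the old ssa.Type interface assertion t.(Type) with the concrete-pointer assertion t.(*types.Type).

```go
// sizedType stands in for *types.Type; only Size() matters here.
type sizedType struct{ size int64 }

func (t *sizedType) Size() int64 { return t.size }

// storeWidth mirrors the generated condition t.(*types.Type).Size().
func storeWidth(aux interface{}) int64 {
	return aux.(*sizedType).Size() // was aux.(Type).Size() on an interface
}
```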
func rewriteValueARM64_OpZero_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [0] _ mem)
// cond:
// result: mem
mem := v.Args[1]
v.reset(OpARM64MOVBstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
mem := v.Args[1]
v.reset(OpARM64MOVHstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
mem := v.Args[1]
v.reset(OpARM64MOVWstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
mem := v.Args[1]
v.reset(OpARM64MOVDstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
v.reset(OpARM64MOVBstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, types.TypeMem)
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpARM64MOVBstore)
v.AuxInt = 4
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, types.TypeMem)
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpARM64MOVHstore)
v.AuxInt = 4
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVWstore, types.TypeMem)
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpARM64MOVBstore)
v.AuxInt = 6
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64MOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARM64MOVWstore, types.TypeMem)
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
v.reset(OpARM64MOVWstore)
v.AuxInt = 8
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [16] ptr mem)
// cond:
// result: (MOVDstore [8] ptr (MOVDconst [0]) (MOVDstore ptr (MOVDconst [0]) mem))
v.reset(OpARM64MOVDstore)
v.AuxInt = 8
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpARM64MOVDstore)
v.AuxInt = 16
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpARM64MOVDstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpARM64MOVDstore, types.TypeMem)
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpARM64MOVDconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpARM64MOVDconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
v0.AuxInt = s - s%8
v0.AddArg(ptr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpZero, types.TypeMem)
v1.AuxInt = s - s%8
v1.AddArg(ptr)
v1.AddArg(mem)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockARM64EQ:
// match: (EQ (CMPconst [0] x) yes no)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValueMIPS(v *Value) bool {
switch v.Op {
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicAnd8 ptr val mem)
// cond: !config.BigEndian
- // result: (LoweredAtomicAnd (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr) (OR <types.UInt32> (SLL <types.UInt32> (ZeroExt8to32 val) (SLLconst <types.UInt32> [3] (ANDconst <types.UInt32> [3] ptr))) (NORconst [0] <types.UInt32> (SLL <types.UInt32> (MOVWconst [0xff]) (SLLconst <types.UInt32> [3] (ANDconst <types.UInt32> [3] ptr))))) mem)
+ // result: (LoweredAtomicAnd (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr) (OR <typ.UInt32> (SLL <typ.UInt32> (ZeroExt8to32 val) (SLLconst <typ.UInt32> [3] (ANDconst <typ.UInt32> [3] ptr))) (NORconst [0] <typ.UInt32> (SLL <typ.UInt32> (MOVWconst [0xff]) (SLLconst <typ.UInt32> [3] (ANDconst <typ.UInt32> [3] ptr))))) mem)
for {
ptr := v.Args[0]
val := v.Args[1]
break
}
v.reset(OpMIPSLoweredAtomicAnd)
- v0 := b.NewValue0(v.Pos, OpMIPSAND, types.UInt32Ptr)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSAND, typ.UInt32Ptr)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = ^3
v0.AddArg(v1)
v0.AddArg(ptr)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSOR, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpMIPSSLL, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSOR, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSSLL, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(val)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v5.AuxInt = 3
- v6 := b.NewValue0(v.Pos, OpMIPSANDconst, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpMIPSANDconst, typ.UInt32)
v6.AuxInt = 3
v6.AddArg(ptr)
v5.AddArg(v6)
v3.AddArg(v5)
v2.AddArg(v3)
- v7 := b.NewValue0(v.Pos, OpMIPSNORconst, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpMIPSNORconst, typ.UInt32)
v7.AuxInt = 0
- v8 := b.NewValue0(v.Pos, OpMIPSSLL, types.UInt32)
- v9 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v8 := b.NewValue0(v.Pos, OpMIPSSLL, typ.UInt32)
+ v9 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v9.AuxInt = 0xff
v8.AddArg(v9)
- v10 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v10 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v10.AuxInt = 3
- v11 := b.NewValue0(v.Pos, OpMIPSANDconst, types.UInt32)
+ v11 := b.NewValue0(v.Pos, OpMIPSANDconst, typ.UInt32)
v11.AuxInt = 3
v11.AddArg(ptr)
v10.AddArg(v11)
}
// match: (AtomicAnd8 ptr val mem)
// cond: config.BigEndian
- // result: (LoweredAtomicAnd (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr) (OR <types.UInt32> (SLL <types.UInt32> (ZeroExt8to32 val) (SLLconst <types.UInt32> [3] (ANDconst <types.UInt32> [3] (XORconst <types.UInt32> [3] ptr)))) (NORconst [0] <types.UInt32> (SLL <types.UInt32> (MOVWconst [0xff]) (SLLconst <types.UInt32> [3] (ANDconst <types.UInt32> [3] (XORconst <types.UInt32> [3] ptr)))))) mem)
+ // result: (LoweredAtomicAnd (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr) (OR <typ.UInt32> (SLL <typ.UInt32> (ZeroExt8to32 val) (SLLconst <typ.UInt32> [3] (ANDconst <typ.UInt32> [3] (XORconst <typ.UInt32> [3] ptr)))) (NORconst [0] <typ.UInt32> (SLL <typ.UInt32> (MOVWconst [0xff]) (SLLconst <typ.UInt32> [3] (ANDconst <typ.UInt32> [3] (XORconst <typ.UInt32> [3] ptr)))))) mem)
for {
ptr := v.Args[0]
val := v.Args[1]
break
}
v.reset(OpMIPSLoweredAtomicAnd)
- v0 := b.NewValue0(v.Pos, OpMIPSAND, types.UInt32Ptr)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSAND, typ.UInt32Ptr)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = ^3
v0.AddArg(v1)
v0.AddArg(ptr)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSOR, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpMIPSSLL, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSOR, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSSLL, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(val)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v5.AuxInt = 3
- v6 := b.NewValue0(v.Pos, OpMIPSANDconst, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpMIPSANDconst, typ.UInt32)
v6.AuxInt = 3
- v7 := b.NewValue0(v.Pos, OpMIPSXORconst, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpMIPSXORconst, typ.UInt32)
v7.AuxInt = 3
v7.AddArg(ptr)
v6.AddArg(v7)
v5.AddArg(v6)
v3.AddArg(v5)
v2.AddArg(v3)
- v8 := b.NewValue0(v.Pos, OpMIPSNORconst, types.UInt32)
+ v8 := b.NewValue0(v.Pos, OpMIPSNORconst, typ.UInt32)
v8.AuxInt = 0
- v9 := b.NewValue0(v.Pos, OpMIPSSLL, types.UInt32)
- v10 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpMIPSSLL, typ.UInt32)
+ v10 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v10.AuxInt = 0xff
v9.AddArg(v10)
- v11 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v11 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v11.AuxInt = 3
- v12 := b.NewValue0(v.Pos, OpMIPSANDconst, types.UInt32)
+ v12 := b.NewValue0(v.Pos, OpMIPSANDconst, typ.UInt32)
v12.AuxInt = 3
- v13 := b.NewValue0(v.Pos, OpMIPSXORconst, types.UInt32)
+ v13 := b.NewValue0(v.Pos, OpMIPSXORconst, typ.UInt32)
v13.AuxInt = 3
v13.AddArg(ptr)
v12.AddArg(v13)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicOr8 ptr val mem)
// cond: !config.BigEndian
- // result: (LoweredAtomicOr (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr) (SLL <types.UInt32> (ZeroExt8to32 val) (SLLconst <types.UInt32> [3] (ANDconst <types.UInt32> [3] ptr))) mem)
+ // result: (LoweredAtomicOr (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr) (SLL <typ.UInt32> (ZeroExt8to32 val) (SLLconst <typ.UInt32> [3] (ANDconst <typ.UInt32> [3] ptr))) mem)
for {
ptr := v.Args[0]
val := v.Args[1]
break
}
v.reset(OpMIPSLoweredAtomicOr)
- v0 := b.NewValue0(v.Pos, OpMIPSAND, types.UInt32Ptr)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSAND, typ.UInt32Ptr)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = ^3
v0.AddArg(v1)
v0.AddArg(ptr)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSSLL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSSLL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v3.AddArg(val)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v4.AuxInt = 3
- v5 := b.NewValue0(v.Pos, OpMIPSANDconst, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMIPSANDconst, typ.UInt32)
v5.AuxInt = 3
v5.AddArg(ptr)
v4.AddArg(v5)
}
// match: (AtomicOr8 ptr val mem)
// cond: config.BigEndian
- // result: (LoweredAtomicOr (AND <types.UInt32Ptr> (MOVWconst [^3]) ptr) (SLL <types.UInt32> (ZeroExt8to32 val) (SLLconst <types.UInt32> [3] (ANDconst <types.UInt32> [3] (XORconst <types.UInt32> [3] ptr)))) mem)
+ // result: (LoweredAtomicOr (AND <typ.UInt32Ptr> (MOVWconst [^3]) ptr) (SLL <typ.UInt32> (ZeroExt8to32 val) (SLLconst <typ.UInt32> [3] (ANDconst <typ.UInt32> [3] (XORconst <typ.UInt32> [3] ptr)))) mem)
for {
ptr := v.Args[0]
val := v.Args[1]
break
}
v.reset(OpMIPSLoweredAtomicOr)
- v0 := b.NewValue0(v.Pos, OpMIPSAND, types.UInt32Ptr)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSAND, typ.UInt32Ptr)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = ^3
v0.AddArg(v1)
v0.AddArg(ptr)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSSLL, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSSLL, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v3.AddArg(val)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v4.AuxInt = 3
- v5 := b.NewValue0(v.Pos, OpMIPSANDconst, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMIPSANDconst, typ.UInt32)
v5.AuxInt = 3
- v6 := b.NewValue0(v.Pos, OpMIPSXORconst, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpMIPSXORconst, typ.UInt32)
v6.AuxInt = 3
v6.AddArg(ptr)
v5.AddArg(v6)
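A plain-Go sketch of the mask/shift arithmetic in the MIPS AtomicAnd8/AtomicOr8 lowerings above: MIPS has no byte-sized atomics, so the rules operate on the containing aligned 32-bit word. addr &^ 3 selects the word, and (addr & 3) << 3 is the bit offset of the byte; the BigEndian variant XORs the byte index with 3 first, matching the XORconst in the second rule of each pair.

```go
// byteInWord computes the word address and bit shift the rules build.
func byteInWord(addr uintptr, bigEndian bool) (wordAddr uintptr, shift uint) {
	wordAddr = addr &^ 3 // AND (MOVWconst [^3]) ptr: aligned word
	idx := addr & 3      // ANDconst [3] ptr: byte index in the word
	if bigEndian {
		idx ^= 3 // XORconst [3] ptr: flip index on big-endian
	}
	return wordAddr, uint(idx) << 3 // SLLconst [3]: bits, not bytes
}
```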
func rewriteValueMIPS_OpBitLen32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitLen32 <t> x)
// cond:
// result: (SUB (MOVWconst [32]) (CLZ <t> x))
t := v.Type
x := v.Args[0]
v.reset(OpMIPSSUB)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 32
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPSCLZ, t)
func rewriteValueMIPS_OpCtz32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Ctz32 <t> x)
// cond:
// result: (SUB (MOVWconst [32]) (CLZ <t> (SUBconst <t> [1] (AND <t> x (NEG <t> x)))))
t := v.Type
x := v.Args[0]
v.reset(OpMIPSSUB)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 32
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPSCLZ, t)
func rewriteValueMIPS_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (Select1 (DIV (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPSDIV, MakeTuple(types.Int32, types.Int32))
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIV, types.NewTuple(typ.Int32, typ.Int32))
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (Select1 (DIVU (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPSDIVU, MakeTuple(types.UInt32, types.UInt32))
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIVU, types.NewTuple(typ.UInt32, typ.UInt32))
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpDiv32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32 x y)
// cond:
// result: (Select1 (DIV x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPSDIV, MakeTuple(types.Int32, types.Int32))
+ v0 := b.NewValue0(v.Pos, OpMIPSDIV, types.NewTuple(typ.Int32, typ.Int32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpDiv32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32u x y)
// cond:
// result: (Select1 (DIVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPSDIVU, MakeTuple(types.UInt32, types.UInt32))
+ v0 := b.NewValue0(v.Pos, OpMIPSDIVU, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
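A plain-Go sketch of the two-result ops in these Div rules: MIPS DIV writes the remainder to HI and the quotient to LO, which SSA models as one tuple-typed value (formerly ssa.MakeTuple, now types.NewTuple) projected by Select0/Select1; Div32 keeps Select1, the quotient.

```go
// hiLo stands in for the tuple type types.NewTuple(typ.Int32, typ.Int32).
type hiLo struct{ hi, lo int32 }

// mipsDIV models the MIPS DIV op: HI = remainder, LO = quotient.
func mipsDIV(x, y int32) hiLo { return hiLo{hi: x % y, lo: x / y} }

// select1 models OpSelect1, projecting the second tuple element.
func select1(t hiLo) int32 { return t.lo }
```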
func rewriteValueMIPS_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (Select1 (DIV (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPSDIV, MakeTuple(types.Int32, types.Int32))
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIV, types.NewTuple(typ.Int32, typ.Int32))
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (Select1 (DIVU (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPSDIVU, MakeTuple(types.UInt32, types.UInt32))
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIVU, types.NewTuple(typ.UInt32, typ.UInt32))
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpEq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq16 x y)
// cond:
// result: (SGTUconst [1] (XOR (ZeroExt16to32 x) (ZeroExt16to32 y)))
y := v.Args[1]
v.reset(OpMIPSSGTUconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpEq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq32 x y)
// cond:
// result: (SGTUconst [1] (XOR x y))
y := v.Args[1]
v.reset(OpMIPSSGTUconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPEQF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPEQF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPEQD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPEQD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpEq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq8 x y)
// cond:
// result: (SGTUconst [1] (XOR (ZeroExt8to32 x) (ZeroExt8to32 y)))
y := v.Args[1]
v.reset(OpMIPSSGTUconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpEqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqB x y)
// cond:
- // result: (XORconst [1] (XOR <types.Bool> x y))
+ // result: (XORconst [1] (XOR <typ.Bool> x y))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.Bool)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
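A plain-Go sketch of the EqB lowering above: XOR of two booleans represented as 0/1 words is 0 exactly when they are equal, and the outer XORconst [1] inverts that into a 1-for-equal result.

```go
// eqB models (XORconst [1] (XOR <typ.Bool> x y)); x and y are 0 or 1.
func eqB(x, y uint32) uint32 {
	return (x ^ y) ^ 1 // 0^... when equal, then flipped to 1
}
```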
func rewriteValueMIPS_OpEqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqPtr x y)
// cond:
// result: (SGTUconst [1] (XOR x y))
y := v.Args[1]
v.reset(OpMIPSSGTUconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpGeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16 x y)
// cond:
// result: (XORconst [1] (SGT (SignExt16to32 y) (SignExt16to32 x)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGT, types.Bool)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGT, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(x)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpGeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16U x y)
// cond:
// result: (XORconst [1] (SGTU (ZeroExt16to32 y) (ZeroExt16to32 x)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(x)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpGeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32 x y)
// cond:
// result: (XORconst [1] (SGT y x))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGT, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGT, typ.Bool)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGEF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGEF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpGeq32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32U x y)
// cond:
// result: (XORconst [1] (SGTU y x))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGED, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGED, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpGeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8 x y)
// cond:
// result: (XORconst [1] (SGT (SignExt8to32 y) (SignExt8to32 x)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGT, types.Bool)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGT, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(x)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpGeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8U x y)
// cond:
// result: (XORconst [1] (SGTU (ZeroExt8to32 y) (ZeroExt8to32 x)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(x)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpGreater16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16 x y)
// cond:
// result: (SGT (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGT)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS_OpGreater16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16U x y)
// cond:
// result: (SGTU (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGTF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGTF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGTD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGTD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpGreater8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8 x y)
// cond:
// result: (SGT (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGT)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS_OpGreater8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8U x y)
// cond:
// result: (SGTU (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS_OpHmul32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32 x y)
// cond:
// result: (Select0 (MULT x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSMULT, MakeTuple(types.Int32, types.Int32))
+ v0 := b.NewValue0(v.Pos, OpMIPSMULT, types.NewTuple(typ.Int32, typ.Int32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
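Hmul32 is where the new tuple types earn their keep: MIPS MULT produces a 64-bit product split across the HI/LO register pair, which the SSA form models as a two-element tuple, and Select0 reads the HI half. A sketch of the arithmetic under that assumption, using a plain 64-bit multiply in place of the tuple machinery:

```go
package main

import "fmt"

// hmul32 mirrors the Hmul32 lowering: form the full 64-bit product and
// keep the high 32 bits, which is what Select0 of the MULT tuple yields.
func hmul32(x, y int32) int32 {
	return int32((int64(x) * int64(y)) >> 32)
}

func main() {
	fmt.Println(hmul32(1<<30, 4)) // 2^30 * 4 = 2^32, so the high word is 1
}
```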
func rewriteValueMIPS_OpHmul32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32u x y)
// cond:
// result: (Select0 (MULTU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSMULTU, MakeTuple(types.UInt32, types.UInt32))
+ v0 := b.NewValue0(v.Pos, OpMIPSMULTU, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpIsNonNil_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsNonNil ptr)
// cond:
// result: (SGTU ptr (MOVWconst [0]))
ptr := v.Args[0]
v.reset(OpMIPSSGTU)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
return true
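The IsNonNil rule relies on pointers being unsigned machine words: an unsigned greater-than against a zero constant yields 1 for any nonzero pointer. A one-line sketch (helper name illustrative):

```go
package main

import "fmt"

// isNonNil mirrors the IsNonNil lowering: SGTU ptr (MOVWconst [0]) is
// true exactly when the pointer's word value is nonzero.
func isNonNil(ptr uintptr) bool {
	return ptr > 0 // unsigned compare against zero
}

func main() {
	fmt.Println(isNonNil(8)) // true
	fmt.Println(isNonNil(0)) // false
}
```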
func rewriteValueMIPS_OpIsSliceInBounds_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsSliceInBounds idx len)
// cond:
// result: (XORconst [1] (SGTU idx len))
len := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
func rewriteValueMIPS_OpLeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16 x y)
// cond:
// result: (XORconst [1] (SGT (SignExt16to32 x) (SignExt16to32 y)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGT, types.Bool)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGT, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
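The Leq rules show the boolean-inversion trick used throughout this file: there is no set-on-less-or-equal instruction, so the 0/1 result of SGT is flipped with XORconst [1]. A sketch of that logic on explicit 0/1 values:

```go
package main

import "fmt"

// leq16 mirrors the Leq16 lowering: compute SGT on the sign-extended
// operands as a 0/1 value, then invert it by XORing with 1.
func leq16(x, y int16) bool {
	sgt := 0
	if int32(x) > int32(y) { // SGT (SignExt16to32 x) (SignExt16to32 y)
		sgt = 1
	}
	return sgt^1 == 1 // XORconst [1]
}

func main() {
	fmt.Println(leq16(3, 3)) // true
	fmt.Println(leq16(4, 3)) // false
}
```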
func rewriteValueMIPS_OpLeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16U x y)
// cond:
// result: (XORconst [1] (SGTU (ZeroExt16to32 x) (ZeroExt16to32 y)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpLeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32 x y)
// cond:
// result: (XORconst [1] (SGT x y))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGT, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGT, typ.Bool)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGEF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGEF, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpLeq32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32U x y)
// cond:
// result: (XORconst [1] (SGTU x y))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGED, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGED, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpLeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8 x y)
// cond:
// result: (XORconst [1] (SGT (SignExt8to32 x) (SignExt8to32 y)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGT, types.Bool)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGT, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpLeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8U x y)
// cond:
// result: (XORconst [1] (SGTU (ZeroExt8to32 x) (ZeroExt8to32 y)))
y := v.Args[1]
v.reset(OpMIPSXORconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpLess16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16 x y)
// cond:
// result: (SGT (SignExt16to32 y) (SignExt16to32 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGT)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS_OpLess16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16U x y)
// cond:
// result: (SGTU (ZeroExt16to32 y) (ZeroExt16to32 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGTF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGTF, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPGTD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPGTD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpLess8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8 x y)
// cond:
// result: (SGT (SignExt8to32 y) (SignExt8to32 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGT)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS_OpLess8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8U x y)
// cond:
// result: (SGTU (ZeroExt8to32 y) (ZeroExt8to32 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS_OpLsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x16 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
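The shift rules all follow the Lsh16x16 shape above: Go defines a shift by a count at or above the operand width to produce 0, but MIPS SLL only consumes the low 5 bits of the count, so the raw shift result is routed through CMOVZ, which substitutes MOVWconst [0] whenever the SGTUconst [32] guard is 0 (count out of range). A sketch of those semantics, with the guard made explicit:

```go
package main

import "fmt"

// lsh mirrors the Lsh*x* lowerings: SLL masks the count to 5 bits, and
// CMOVZ picks 0 instead of the shift result when the zero-extended
// count is not unsigned-less-than 32.
func lsh(x uint32, y uint16) uint32 {
	shifted := x << (uint32(y) & 31) // SLL uses only the low 5 bits
	inRange := uint32(0)
	if 32 > uint32(y) { // SGTUconst [32] (ZeroExt16to32 y)
		inRange = 1
	}
	if inRange == 0 { // CMOVZ: select MOVWconst [0] when the guard is 0
		return 0
	}
	return shifted
}

func main() {
	fmt.Println(lsh(1, 3))  // 8
	fmt.Println(lsh(1, 40)) // 0, even though 40&31 == 8
}
```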
func rewriteValueMIPS_OpLsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x32 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x y) (MOVWconst [0]) (SGTUconst [32] y))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v2.AuxInt = 32
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueMIPS_OpLsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x8 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpLsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x16 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpLsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x32 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x y) (MOVWconst [0]) (SGTUconst [32] y))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v2.AuxInt = 32
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueMIPS_OpLsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x8 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpLsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x16 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpLsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x32 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x y) (MOVWconst [0]) (SGTUconst [32] y))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v2.AuxInt = 32
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueMIPS_OpLsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x8 <t> x y)
// cond:
// result: (CMOVZ (SLL <t> x (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSLL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (Select0 (DIV (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSDIV, MakeTuple(types.Int32, types.Int32))
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIV, types.NewTuple(typ.Int32, typ.Int32))
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
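The Mod rules pair with the Div rules through the DIV tuple: on MIPS, DIV leaves the remainder in HI and the quotient in LO, and Select0 here reads the HI (remainder) half of the tuple. A sketch of the resulting semantics for the 16-bit case:

```go
package main

import "fmt"

// mod16 mirrors the Mod16 lowering: sign-extend both operands to 32
// bits, divide, and keep the remainder (Select0 of the DIV tuple).
func mod16(x, y int16) int32 {
	return int32(x) % int32(y) // Select0 (DIV (SignExt16to32 x) (SignExt16to32 y))
}

func main() {
	fmt.Println(mod16(-7, 3)) // -1: Go's remainder takes the dividend's sign
}
```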
func rewriteValueMIPS_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (Select0 (DIVU (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSDIVU, MakeTuple(types.UInt32, types.UInt32))
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIVU, types.NewTuple(typ.UInt32, typ.UInt32))
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpMod32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32 x y)
// cond:
// result: (Select0 (DIV x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSDIV, MakeTuple(types.Int32, types.Int32))
+ v0 := b.NewValue0(v.Pos, OpMIPSDIV, types.NewTuple(typ.Int32, typ.Int32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpMod32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32u x y)
// cond:
// result: (Select0 (DIVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSDIVU, MakeTuple(types.UInt32, types.UInt32))
+ v0 := b.NewValue0(v.Pos, OpMIPSDIVU, types.NewTuple(typ.UInt32, typ.UInt32))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (Select0 (DIV (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSDIV, MakeTuple(types.Int32, types.Int32))
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIV, types.NewTuple(typ.Int32, typ.Int32))
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (Select0 (DIVU (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPSDIVU, MakeTuple(types.UInt32, types.UInt32))
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSDIVU, types.NewTuple(typ.UInt32, typ.UInt32))
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpMIPSMOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [2] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore dst (MOVHUload src mem) mem)
for {
if v.AuxInt != 2 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVHUload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVHUload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpMIPSMOVBstore)
v.AuxInt = 1
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v0.AuxInt = 1
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
return true
}
// match: (Move [4] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore dst (MOVWload src mem) mem)
for {
if v.AuxInt != 4 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [4] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [2] dst (MOVHUload [2] src mem) (MOVHstore dst (MOVHUload src mem) mem))
for {
if v.AuxInt != 4 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVHUload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVHUload, typ.UInt16)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVHUload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVHUload, typ.UInt16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpMIPSMOVBstore)
v.AuxInt = 3
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v0.AuxInt = 3
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v2.AuxInt = 2
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v3.AuxInt = 1
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v4.AuxInt = 1
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v5.AddArg(dst)
- v6 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v6 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v6.AddArg(src)
v6.AddArg(mem)
v5.AddArg(v6)
v.reset(OpMIPSMOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v1.AuxInt = 1
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v2.AuxInt = 1
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVBUload, types.UInt8)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVBUload, typ.UInt8)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [8] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [4] dst (MOVWload [4] src mem) (MOVWstore dst (MOVWload src mem) mem))
for {
if v.AuxInt != 8 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
return true
}
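The Move rules above are all gated on `t.(*types.Type).Alignment()`: a fixed-size copy is unrolled into word, halfword, or byte load/store pairs depending on what alignment the element type guarantees. A sketch of just that width-selection decision (the helper is illustrative, not a function in the compiler):

```go
package main

import "fmt"

// moveWidth sketches the alignment checks guarding the Move rewrites:
// 4-byte MOVW pieces require 4-byte alignment, 2-byte MOVH pieces
// require 2-byte alignment, and anything else falls back to MOVB.
func moveWidth(size, align int64) int64 {
	switch {
	case align%4 == 0 && size%4 == 0:
		return 4 // MOVWstore / MOVWload pieces
	case align%2 == 0 && size%2 == 0:
		return 2 // MOVHstore / MOVHload pieces
	default:
		return 1 // MOVBstore / MOVBload pieces
	}
}

func main() {
	fmt.Println(moveWidth(8, 4)) // 4
	fmt.Println(moveWidth(8, 2)) // 2
	fmt.Println(moveWidth(3, 1)) // 1
}
```

Copies larger than 16 bytes, or with unhelpful alignment, instead fall through to the generic LoweredMove loop, as the final Move rule in this section shows.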
// match: (Move [8] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [6] dst (MOVHload [6] src mem) (MOVHstore [4] dst (MOVHload [4] src mem) (MOVHstore [2] dst (MOVHload [2] src mem) (MOVHstore dst (MOVHload src mem) mem))))
for {
if v.AuxInt != 8 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AuxInt = 6
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v0.AuxInt = 6
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v3.AuxInt = 2
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v4.AuxInt = 2
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v5.AddArg(dst)
- v6 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v6 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v6.AddArg(src)
v6.AddArg(mem)
v5.AddArg(v6)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [6] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [4] dst (MOVHload [4] src mem) (MOVHstore [2] dst (MOVHload [2] src mem) (MOVHstore dst (MOVHload src mem) mem)))
for {
if v.AuxInt != 6 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v2.AuxInt = 2
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVHload, types.Int16)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVHload, typ.Int16)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [12] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [8] dst (MOVWload [8] src mem) (MOVWstore [4] dst (MOVWload [4] src mem) (MOVWstore dst (MOVWload src mem) mem)))
for {
if v.AuxInt != 12 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AuxInt = 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v0.AuxInt = 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [16] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [12] dst (MOVWload [12] src mem) (MOVWstore [8] dst (MOVWload [8] src mem) (MOVWstore [4] dst (MOVWload [4] src mem) (MOVWstore dst (MOVWload src mem) mem))))
for {
if v.AuxInt != 16 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AuxInt = 12
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v0.AuxInt = 12
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v2.AuxInt = 8
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v3.AuxInt = 4
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v4.AuxInt = 4
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v5.AddArg(dst)
- v6 := b.NewValue0(v.Pos, OpMIPSMOVWload, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpMIPSMOVWload, typ.UInt32)
v6.AddArg(src)
v6.AddArg(mem)
v5.AddArg(v6)
return true
}
// match: (Move [s] {t} dst src mem)
- // cond: (s > 16 || t.(Type).Alignment()%4 != 0)
- // result: (LoweredMove [t.(Type).Alignment()] dst src (ADDconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)]) mem)
+ // cond: (s > 16 || t.(*types.Type).Alignment()%4 != 0)
+ // result: (LoweredMove [t.(*types.Type).Alignment()] dst src (ADDconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)]) mem)
for {
s := v.AuxInt
t := v.Aux
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(s > 16 || t.(Type).Alignment()%4 != 0) {
+ if !(s > 16 || t.(*types.Type).Alignment()%4 != 0) {
break
}
v.reset(OpMIPSLoweredMove)
- v.AuxInt = t.(Type).Alignment()
+ v.AuxInt = t.(*types.Type).Alignment()
v.AddArg(dst)
v.AddArg(src)
v0 := b.NewValue0(v.Pos, OpMIPSADDconst, src.Type)
- v0.AuxInt = s - moveSize(t.(Type).Alignment(), config)
+ v0.AuxInt = s - moveSize(t.(*types.Type).Alignment(), config)
v0.AddArg(src)
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueMIPS_OpNeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq16 x y)
// cond:
// result: (SGTU (XOR (ZeroExt16to32 x) (ZeroExt16to32 y)) (MOVWconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = 0
v.AddArg(v3)
return true
func rewriteValueMIPS_OpNeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq32 x y)
// cond:
// result: (SGTU (XOR x y) (MOVWconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagFalse)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPEQF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPEQF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSFPFlagFalse)
- v0 := b.NewValue0(v.Pos, OpMIPSCMPEQD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMPEQD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS_OpNeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq8 x y)
// cond:
// result: (SGTU (XOR (ZeroExt8to32 x) (ZeroExt8to32 y)) (MOVWconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = 0
v.AddArg(v3)
return true
func rewriteValueMIPS_OpNeqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NeqPtr x y)
// cond:
// result: (SGTU (XOR x y) (MOVWconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSGTU)
- v0 := b.NewValue0(v.Pos, OpMIPSXOR, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSXOR, typ.UInt32)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
return true
func rewriteValueMIPS_OpRsh16Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux16 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> (ZeroExt16to32 x) (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
y := v.Args[1]
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS_OpRsh16Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux32 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> (ZeroExt16to32 x) y) (MOVWconst [0]) (SGTUconst [32] y))
y := v.Args[1]
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
v3.AddArg(y)
v.AddArg(v3)
func rewriteValueMIPS_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 x (Const64 [c]))
// cond: uint32(c) < 16
- // result: (SRLconst (SLLconst <types.UInt32> x [16]) [c+16])
+ // result: (SRLconst (SLLconst <typ.UInt32> x [16]) [c+16])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpMIPSSRLconst)
v.AuxInt = c + 16
- v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v0.AuxInt = 16
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpRsh16Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux8 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> (ZeroExt16to32 x) (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
y := v.Args[1]
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS_OpRsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x16 x y)
// cond:
- // result: (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
+ // result: (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = -1
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueMIPS_OpRsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x32 x y)
// cond:
- // result: (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
+ // result: (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = -1
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
v3.AddArg(y)
v1.AddArg(v3)
func rewriteValueMIPS_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 x (Const64 [c]))
// cond: uint32(c) < 16
- // result: (SRAconst (SLLconst <types.UInt32> x [16]) [c+16])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [16]) [c+16])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpMIPSSRAconst)
v.AuxInt = c + 16
- v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v0.AuxInt = 16
v0.AddArg(x)
v.AddArg(v0)
}
// match: (Rsh16x64 x (Const64 [c]))
// cond: uint32(c) >= 16
- // result: (SRAconst (SLLconst <types.UInt32> x [16]) [31])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [16]) [31])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpMIPSSRAconst)
v.AuxInt = 31
- v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v0.AuxInt = 16
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpRsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x8 x y)
// cond:
- // result: (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
+ // result: (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = -1
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueMIPS_OpRsh32Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux16 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> x (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpRsh32Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux32 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> x y) (MOVWconst [0]) (SGTUconst [32] y))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v2.AuxInt = 32
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueMIPS_OpRsh32Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux8 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> x (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v.AddArg(v3)
func rewriteValueMIPS_OpRsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x16 x y)
// cond:
- // result: (SRA x ( CMOVZ <types.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
+ // result: (SRA x ( CMOVZ <typ.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = -1
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v0.AddArg(v3)
func rewriteValueMIPS_OpRsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x32 x y)
// cond:
- // result: (SRA x ( CMOVZ <types.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
+ // result: (SRA x ( CMOVZ <typ.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = -1
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v2 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v2.AuxInt = 32
v2.AddArg(y)
v0.AddArg(v2)
func rewriteValueMIPS_OpRsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x8 x y)
// cond:
- // result: (SRA x ( CMOVZ <types.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
+ // result: (SRA x ( CMOVZ <typ.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = -1
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
- v4 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v4.AddArg(y)
v3.AddArg(v4)
v0.AddArg(v3)
func rewriteValueMIPS_OpRsh8Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux16 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> (ZeroExt8to32 x) (ZeroExt16to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt16to32 y)))
y := v.Args[1]
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS_OpRsh8Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux32 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> (ZeroExt8to32 x) y) (MOVWconst [0]) (SGTUconst [32] y))
y := v.Args[1]
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
v3.AddArg(y)
v.AddArg(v3)
func rewriteValueMIPS_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 x (Const64 [c]))
// cond: uint32(c) < 8
- // result: (SRLconst (SLLconst <types.UInt32> x [24]) [c+24])
+ // result: (SRLconst (SLLconst <typ.UInt32> x [24]) [c+24])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpMIPSSRLconst)
v.AuxInt = c + 24
- v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v0.AuxInt = 24
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpRsh8Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux8 <t> x y)
// cond:
// result: (CMOVZ (SRL <t> (ZeroExt8to32 x) (ZeroExt8to32 y) ) (MOVWconst [0]) (SGTUconst [32] (ZeroExt8to32 y)))
y := v.Args[1]
v.reset(OpMIPSCMOVZ)
v0 := b.NewValue0(v.Pos, OpMIPSSRL, t)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = 0
v.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS_OpRsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x16 x y)
// cond:
- // result: (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
+ // result: (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> (ZeroExt16to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt16to32 y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = -1
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueMIPS_OpRsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x32 x y)
// cond:
- // result: (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
+ // result: (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> y (MOVWconst [-1]) (SGTUconst [32] y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = -1
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v3.AuxInt = 32
v3.AddArg(y)
v1.AddArg(v3)
func rewriteValueMIPS_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 x (Const64 [c]))
// cond: uint32(c) < 8
- // result: (SRAconst (SLLconst <types.UInt32> x [24]) [c+24])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [24]) [c+24])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpMIPSSRAconst)
v.AuxInt = c + 24
- v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v0.AuxInt = 24
v0.AddArg(x)
v.AddArg(v0)
}
// match: (Rsh8x64 x (Const64 [c]))
// cond: uint32(c) >= 8
- // result: (SRAconst (SLLconst <types.UInt32> x [24]) [31])
+ // result: (SRAconst (SLLconst <typ.UInt32> x [24]) [31])
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpMIPSSRAconst)
v.AuxInt = 31
- v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSSLLconst, typ.UInt32)
v0.AuxInt = 24
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS_OpRsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x8 x y)
// cond:
- // result: (SRA (SignExt16to32 x) ( CMOVZ <types.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
+ // result: (SRA (SignExt16to32 x) ( CMOVZ <typ.UInt32> (ZeroExt8to32 y) (MOVWconst [-1]) (SGTUconst [32] (ZeroExt8to32 y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPSSRA)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSCMOVZ, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v3.AuxInt = -1
v1.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, types.Bool)
+ v4 := b.NewValue0(v.Pos, OpMIPSSGTUconst, typ.Bool)
v4.AuxInt = 32
- v5 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v5.AddArg(y)
v4.AddArg(v5)
v1.AddArg(v4)
func rewriteValueMIPS_OpSelect0_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Select0 (Add32carry <t> x y))
// cond:
// result: (ADD <t.FieldType(0)> x y)
v0.AuxInt = -1
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
v.AddArg(x)
v0.AuxInt = -1
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v.AddArg(v1)
v.AddArg(x)
func rewriteValueMIPS_OpSelect1_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Select1 (Add32carry <t> x y))
// cond:
- // result: (SGTU <types.Bool> x (ADD <t.FieldType(0)> x y))
+ // result: (SGTU <typ.Bool> x (ADD <t.FieldType(0)> x y))
for {
v_0 := v.Args[0]
if v_0.Op != OpAdd32carry {
x := v_0.Args[0]
y := v_0.Args[1]
v.reset(OpMIPSSGTU)
- v.Type = types.Bool
+ v.Type = typ.Bool
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpMIPSADD, t.FieldType(0))
v0.AddArg(x)
}
// match: (Select1 (Sub32carry <t> x y))
// cond:
- // result: (SGTU <types.Bool> (SUB <t.FieldType(0)> x y) x)
+ // result: (SGTU <typ.Bool> (SUB <t.FieldType(0)> x y) x)
for {
v_0 := v.Args[0]
if v_0.Op != OpSub32carry {
x := v_0.Args[0]
y := v_0.Args[1]
v.reset(OpMIPSSGTU)
- v.Type = types.Bool
+ v.Type = typ.Bool
v0 := b.NewValue0(v.Pos, OpMIPSSUB, t.FieldType(0))
v0.AddArg(x)
v0.AddArg(y)
}
func rewriteValueMIPS_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpMIPSMOVBstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVHstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpMIPSMOVHstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && !is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && !is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)) {
break
}
v.reset(OpMIPSMOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (MOVFstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpMIPSMOVFstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (MOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpMIPSMOVDstore)
func rewriteValueMIPS_OpZero_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [0] _ mem)
// cond:
// result: mem
mem := v.Args[1]
v.reset(OpMIPSMOVBstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [2] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore ptr (MOVWconst [0]) mem)
for {
if v.AuxInt != 2 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
v.reset(OpMIPSMOVBstore)
v.AuxInt = 1
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
return true
}
// match: (Zero [4] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore ptr (MOVWconst [0]) mem)
for {
if v.AuxInt != 4 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [4] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [2] ptr (MOVWconst [0]) (MOVHstore [0] ptr (MOVWconst [0]) mem))
for {
if v.AuxInt != 4 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpMIPSMOVBstore)
v.AuxInt = 3
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v3.AuxInt = 1
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v5.AuxInt = 0
v5.AddArg(ptr)
- v6 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v6.AuxInt = 0
v5.AddArg(v6)
v5.AddArg(mem)
v.reset(OpMIPSMOVBstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v1.AuxInt = 1
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVBstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [6] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [4] ptr (MOVWconst [0]) (MOVHstore [2] ptr (MOVWconst [0]) (MOVHstore [0] ptr (MOVWconst [0]) mem)))
for {
if v.AuxInt != 6 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPSMOVHstore)
v.AuxInt = 4
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVHstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [8] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [4] ptr (MOVWconst [0]) (MOVWstore [0] ptr (MOVWconst [0]) mem))
for {
if v.AuxInt != 8 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AuxInt = 4
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [12] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [8] ptr (MOVWconst [0]) (MOVWstore [4] ptr (MOVWconst [0]) (MOVWstore [0] ptr (MOVWconst [0]) mem)))
for {
if v.AuxInt != 12 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AuxInt = 8
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [16] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [12] ptr (MOVWconst [0]) (MOVWstore [8] ptr (MOVWconst [0]) (MOVWstore [4] ptr (MOVWconst [0]) (MOVWstore [0] ptr (MOVWconst [0]) mem))))
for {
if v.AuxInt != 16 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPSMOVWstore)
v.AuxInt = 12
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v3.AuxInt = 4
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v4.AuxInt = 0
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPSMOVWstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPSMOVWstore, types.TypeMem)
v5.AuxInt = 0
v5.AddArg(ptr)
- v6 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v6.AuxInt = 0
v5.AddArg(v6)
v5.AddArg(mem)
return true
}
// match: (Zero [s] {t} ptr mem)
- // cond: (s > 16 || t.(Type).Alignment()%4 != 0)
- // result: (LoweredZero [t.(Type).Alignment()] ptr (ADDconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)]) mem)
+ // cond: (s > 16 || t.(*types.Type).Alignment()%4 != 0)
+ // result: (LoweredZero [t.(*types.Type).Alignment()] ptr (ADDconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)]) mem)
for {
s := v.AuxInt
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(s > 16 || t.(Type).Alignment()%4 != 0) {
+ if !(s > 16 || t.(*types.Type).Alignment()%4 != 0) {
break
}
v.reset(OpMIPSLoweredZero)
- v.AuxInt = t.(Type).Alignment()
+ v.AuxInt = t.(*types.Type).Alignment()
v.AddArg(ptr)
v0 := b.NewValue0(v.Pos, OpMIPSADDconst, ptr.Type)
- v0.AuxInt = s - moveSize(t.(Type).Alignment(), config)
+ v0.AuxInt = s - moveSize(t.(*types.Type).Alignment(), config)
v0.AddArg(ptr)
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueMIPS_OpZeromask_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zeromask x)
// cond:
// result: (NEG (SGTU x (MOVWconst [0])))
for {
x := v.Args[0]
v.reset(OpMIPSNEG)
- v0 := b.NewValue0(v.Pos, OpMIPSSGTU, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpMIPSSGTU, typ.Bool)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMIPSMOVWconst, typ.UInt32)
v1.AuxInt = 0
v0.AddArg(v1)
v.AddArg(v0)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockMIPSEQ:
// match: (EQ (FPFlagTrue cmp) yes no)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValueMIPS64(v *Value) bool {
switch v.Op {
func rewriteValueMIPS64_OpCom16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Com16 x)
// cond:
// result: (NOR (MOVVconst [0]) x)
for {
x := v.Args[0]
v.reset(OpMIPS64NOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(x)
func rewriteValueMIPS64_OpCom32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Com32 x)
// cond:
// result: (NOR (MOVVconst [0]) x)
for {
x := v.Args[0]
v.reset(OpMIPS64NOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(x)
func rewriteValueMIPS64_OpCom64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Com64 x)
// cond:
// result: (NOR (MOVVconst [0]) x)
for {
x := v.Args[0]
v.reset(OpMIPS64NOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(x)
func rewriteValueMIPS64_OpCom8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Com8 x)
// cond:
// result: (NOR (MOVVconst [0]) x)
for {
x := v.Args[0]
v.reset(OpMIPS64NOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(x)
func rewriteValueMIPS64_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (Select1 (DIVV (SignExt16to64 x) (SignExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
- v1 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
+ v1 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (Select1 (DIVVU (ZeroExt16to64 x) (ZeroExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32 x y)
// cond:
// result: (Select1 (DIVV (SignExt32to64 x) (SignExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
- v1 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
+ v1 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32u x y)
// cond:
// result: (Select1 (DIVVU (ZeroExt32to64 x) (ZeroExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div64 x y)
// cond:
// result: (Select1 (DIVV x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv64u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div64u x y)
// cond:
// result: (Select1 (DIVVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (Select1 (DIVV (SignExt8to64 x) (SignExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
- v1 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
+ v1 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (Select1 (DIVVU (ZeroExt8to64 x) (ZeroExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpEq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq16 x y)
// cond:
// result: (SGTU (MOVVconst [1]) (XOR (ZeroExt16to64 x) (ZeroExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpEq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq32 x y)
// cond:
// result: (SGTU (MOVVconst [1]) (XOR (ZeroExt32to64 x) (ZeroExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpEq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq64 x y)
// cond:
// result: (SGTU (MOVVconst [1]) (XOR x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
v1.AddArg(x)
v1.AddArg(y)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpEq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq8 x y)
// cond:
// result: (SGTU (MOVVconst [1]) (XOR (ZeroExt8to64 x) (ZeroExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpEqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqB x y)
// cond:
- // result: (XOR (MOVVconst [1]) (XOR <types.Bool> x y))
+ // result: (XOR (MOVVconst [1]) (XOR <typ.Bool> x y))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64XOR, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.Bool)
v1.AddArg(x)
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueMIPS64_OpEqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqPtr x y)
// cond:
// result: (SGTU (MOVVconst [1]) (XOR x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
v1.AddArg(x)
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueMIPS64_OpGeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT (SignExt16to64 y) (SignExt16to64 x)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
- v2 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v3.AddArg(x)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpGeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU (ZeroExt16to64 y) (ZeroExt16to64 x)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(x)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpGeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT (SignExt32to64 y) (SignExt32to64 x)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
- v2 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v3.AddArg(x)
v1.AddArg(v3)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGEF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGEF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpGeq32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU (ZeroExt32to64 y) (ZeroExt32to64 x)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(x)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpGeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT y x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
v1.AddArg(y)
v1.AddArg(x)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGED, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGED, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpGeq64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU y x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v1.AddArg(y)
v1.AddArg(x)
v.AddArg(v1)
func rewriteValueMIPS64_OpGeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT (SignExt8to64 y) (SignExt8to64 x)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
- v2 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v3.AddArg(x)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpGeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU (ZeroExt8to64 y) (ZeroExt8to64 x)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(x)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpGreater16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16 x y)
// cond:
// result: (SGT (SignExt16to64 x) (SignExt16to64 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGT)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpGreater16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16U x y)
// cond:
// result: (SGTU (ZeroExt16to64 x) (ZeroExt16to64 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpGreater32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater32 x y)
// cond:
// result: (SGT (SignExt32to64 x) (SignExt32to64 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGT)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpGreater32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater32U x y)
// cond:
// result: (SGTU (ZeroExt32to64 x) (ZeroExt32to64 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpGreater8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8 x y)
// cond:
// result: (SGT (SignExt8to64 x) (SignExt8to64 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGT)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpGreater8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8U x y)
// cond:
// result: (SGTU (ZeroExt8to64 x) (ZeroExt8to64 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpHmul32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32 x y)
// cond:
- // result: (SRAVconst (Select1 <types.Int64> (MULV (SignExt32to64 x) (SignExt32to64 y))) [32])
+ // result: (SRAVconst (Select1 <typ.Int64> (MULV (SignExt32to64 x) (SignExt32to64 y))) [32])
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAVconst)
v.AuxInt = 32
- v0 := b.NewValue0(v.Pos, OpSelect1, types.Int64)
- v1 := b.NewValue0(v.Pos, OpMIPS64MULV, MakeTuple(types.Int64, types.Int64))
- v2 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSelect1, typ.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MULV, types.NewTuple(typ.Int64, typ.Int64))
+ v2 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
func rewriteValueMIPS64_OpHmul32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32u x y)
// cond:
- // result: (SRLVconst (Select1 <types.UInt64> (MULVU (ZeroExt32to64 x) (ZeroExt32to64 y))) [32])
+ // result: (SRLVconst (Select1 <typ.UInt64> (MULVU (ZeroExt32to64 x) (ZeroExt32to64 y))) [32])
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRLVconst)
v.AuxInt = 32
- v0 := b.NewValue0(v.Pos, OpSelect1, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpMIPS64MULVU, MakeTuple(types.UInt64, types.UInt64))
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpSelect1, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MULVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
func rewriteValueMIPS64_OpHmul64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul64 x y)
// cond:
// result: (Select0 (MULV x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64MULV, MakeTuple(types.Int64, types.Int64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64MULV, types.NewTuple(typ.Int64, typ.Int64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpHmul64u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul64u x y)
// cond:
// result: (Select0 (MULVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpIsNonNil_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsNonNil ptr)
// cond:
// result: (SGTU ptr (MOVVconst [0]))
ptr := v.Args[0]
v.reset(OpMIPS64SGTU)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
return true
func rewriteValueMIPS64_OpIsSliceInBounds_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsSliceInBounds idx len)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU idx len))
idx := v.Args[0]
len := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v1.AddArg(idx)
v1.AddArg(len)
v.AddArg(v1)
func rewriteValueMIPS64_OpLeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT (SignExt16to64 x) (SignExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
- v2 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpLeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU (ZeroExt16to64 x) (ZeroExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpLeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT (SignExt32to64 x) (SignExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
- v2 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGEF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGEF, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS64_OpLeq32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU (ZeroExt32to64 x) (ZeroExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpLeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
v1.AddArg(x)
v1.AddArg(y)
v.AddArg(v1)
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGED, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGED, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS64_OpLeq64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v1.AddArg(x)
v1.AddArg(y)
v.AddArg(v1)
func rewriteValueMIPS64_OpLeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8 x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGT (SignExt8to64 x) (SignExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGT, types.Bool)
- v2 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGT, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpLeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8U x y)
// cond:
// result: (XOR (MOVVconst [1]) (SGTU (ZeroExt8to64 x) (ZeroExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64XOR)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 1
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v.AddArg(v1)
func rewriteValueMIPS64_OpLess16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16 x y)
// cond:
// result: (SGT (SignExt16to64 y) (SignExt16to64 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGT)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpLess16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16U x y)
// cond:
// result: (SGTU (ZeroExt16to64 y) (ZeroExt16to64 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpLess32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less32 x y)
// cond:
// result: (SGT (SignExt32to64 y) (SignExt32to64 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGT)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v1.AddArg(x)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTF, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS64_OpLess32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less32U x y)
// cond:
// result: (SGTU (ZeroExt32to64 y) (ZeroExt32to64 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagTrue)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPGTD, types.TypeFlags)
v0.AddArg(y)
v0.AddArg(x)
v.AddArg(v0)
func rewriteValueMIPS64_OpLess8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8 x y)
// cond:
// result: (SGT (SignExt8to64 y) (SignExt8to64 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGT)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpLess8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8U x y)
// cond:
// result: (SGTU (ZeroExt8to64 y) (ZeroExt8to64 x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpLsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
func rewriteValueMIPS64_OpLsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
func rewriteValueMIPS64_OpLsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
func rewriteValueMIPS64_OpLsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SLLV <t> x (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SLLV <t> x (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpLsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SLLV <t> x y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SLLV <t> x y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
func rewriteValueMIPS64_OpLsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SLLV <t> x (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SLLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (Select0 (DIVV (SignExt16to64 x) (SignExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
- v1 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
+ v1 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (Select0 (DIVVU (ZeroExt16to64 x) (ZeroExt16to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
- v1 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32 x y)
// cond:
// result: (Select0 (DIVV (SignExt32to64 x) (SignExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
- v1 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
+ v1 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32u x y)
// cond:
// result: (Select0 (DIVVU (ZeroExt32to64 x) (ZeroExt32to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod64 x y)
// cond:
// result: (Select0 (DIVV x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod64u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod64u x y)
// cond:
// result: (Select0 (DIVVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (Select0 (DIVV (SignExt8to64 x) (SignExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, MakeTuple(types.Int64, types.Int64))
- v1 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVV, types.NewTuple(typ.Int64, typ.Int64))
+ v1 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueMIPS64_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (Select0 (DIVVU (ZeroExt8to64 x) (ZeroExt8to64 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect0)
- v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, MakeTuple(types.UInt64, types.UInt64))
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64DIVVU, types.NewTuple(typ.UInt64, typ.UInt64))
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
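The `types` → `typ` rename running through these hunks exists because the old local variable shadowed the new package name: once package `types` is imported, a local `types := &b.Func.Config.Types` makes calls like `types.NewTuple` refer to the struct, not the package. A minimal standalone illustration, using the standard `go/types` package as a stand-in for `cmd/compile/internal/types` (which cannot be imported outside the compiler):

```go
package main

import (
	"fmt"
	"go/token"
	"go/types" // stand-in for cmd/compile/internal/types
)

// makeTuple mirrors what the rewritten hunks do with
// types.NewTuple(typ.Int64, typ.Int64): the local is named "typ" so it
// does not shadow the imported "types" package used on the same line.
func makeTuple() *types.Tuple {
	typ := types.Typ[types.Uint64] // local "typ" coexists with package "types"
	return types.NewTuple(
		types.NewVar(token.NoPos, nil, "hi", typ),
		types.NewVar(token.NoPos, nil, "lo", typ),
	)
}

func main() {
	fmt.Println(makeTuple())
}
```

Had the local kept the name `types`, the `types.NewTuple(...)` call above would not compile inside the same function, which is exactly why the generated rewrite files switch to `typ`.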
func rewriteValueMIPS64_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpMIPS64MOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [2] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore dst (MOVHload src mem) mem)
for {
if v.AuxInt != 2 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpMIPS64MOVBstore)
v.AuxInt = 1
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v0.AuxInt = 1
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
return true
}
// match: (Move [4] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore dst (MOVWload src mem) mem)
for {
if v.AuxInt != 4 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPS64MOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVWload, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVWload, typ.Int32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [4] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [2] dst (MOVHload [2] src mem) (MOVHstore dst (MOVHload src mem) mem))
for {
if v.AuxInt != 4 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpMIPS64MOVBstore)
v.AuxInt = 3
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v0.AuxInt = 3
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v2.AuxInt = 2
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v3.AuxInt = 1
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v4.AuxInt = 1
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v5.AddArg(dst)
- v6 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v6 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v6.AddArg(src)
v6.AddArg(mem)
v5.AddArg(v6)
return true
}
// match: (Move [8] {t} dst src mem)
- // cond: t.(Type).Alignment()%8 == 0
+ // cond: t.(*types.Type).Alignment()%8 == 0
// result: (MOVVstore dst (MOVVload src mem) mem)
for {
if v.AuxInt != 8 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%8 == 0) {
+ if !(t.(*types.Type).Alignment()%8 == 0) {
break
}
v.reset(OpMIPS64MOVVstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVload, typ.UInt64)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [8] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [4] dst (MOVWload [4] src mem) (MOVWstore dst (MOVWload src mem) mem))
for {
if v.AuxInt != 8 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPS64MOVWstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVWload, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVWload, typ.Int32)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVWload, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVWload, typ.Int32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
return true
}
// match: (Move [8] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [6] dst (MOVHload [6] src mem) (MOVHstore [4] dst (MOVHload [4] src mem) (MOVHstore [2] dst (MOVHload [2] src mem) (MOVHstore dst (MOVHload src mem) mem))))
for {
if v.AuxInt != 8 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AuxInt = 6
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v0.AuxInt = 6
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v3.AuxInt = 2
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v4.AuxInt = 2
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v5.AddArg(dst)
- v6 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v6 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v6.AddArg(src)
v6.AddArg(mem)
v5.AddArg(v6)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [3] dst src mem)
// cond:
// result: (MOVBstore [2] dst (MOVBload [2] src mem) (MOVBstore [1] dst (MOVBload [1] src mem) (MOVBstore dst (MOVBload src mem) mem)))
v.reset(OpMIPS64MOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v1.AuxInt = 1
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v2.AuxInt = 1
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVBload, types.Int8)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVBload, typ.Int8)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [6] {t} dst src mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [4] dst (MOVHload [4] src mem) (MOVHstore [2] dst (MOVHload [2] src mem) (MOVHstore dst (MOVHload src mem) mem)))
for {
if v.AuxInt != 6 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v2.AuxInt = 2
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVHload, types.Int16)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVHload, typ.Int16)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [12] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [8] dst (MOVWload [8] src mem) (MOVWstore [4] dst (MOVWload [4] src mem) (MOVWstore dst (MOVWload src mem) mem)))
for {
if v.AuxInt != 12 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPS64MOVWstore)
v.AuxInt = 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVWload, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVWload, typ.Int32)
v0.AuxInt = 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVWload, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVWload, typ.Int32)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVWload, types.Int32)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVWload, typ.Int32)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [16] {t} dst src mem)
- // cond: t.(Type).Alignment()%8 == 0
+ // cond: t.(*types.Type).Alignment()%8 == 0
// result: (MOVVstore [8] dst (MOVVload [8] src mem) (MOVVstore dst (MOVVload src mem) mem))
for {
if v.AuxInt != 16 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%8 == 0) {
+ if !(t.(*types.Type).Alignment()%8 == 0) {
break
}
v.reset(OpMIPS64MOVVstore)
v.AuxInt = 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVload, typ.UInt64)
v0.AuxInt = 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVload, typ.UInt64)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
return true
}
// match: (Move [24] {t} dst src mem)
- // cond: t.(Type).Alignment()%8 == 0
+ // cond: t.(*types.Type).Alignment()%8 == 0
// result: (MOVVstore [16] dst (MOVVload [16] src mem) (MOVVstore [8] dst (MOVVload [8] src mem) (MOVVstore dst (MOVVload src mem) mem)))
for {
if v.AuxInt != 24 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%8 == 0) {
+ if !(t.(*types.Type).Alignment()%8 == 0) {
break
}
v.reset(OpMIPS64MOVVstore)
v.AuxInt = 16
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVload, typ.UInt64)
v0.AuxInt = 16
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVload, typ.UInt64)
v2.AuxInt = 8
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVload, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVload, typ.UInt64)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
return true
}
// match: (Move [s] {t} dst src mem)
- // cond: s > 24 || t.(Type).Alignment()%8 != 0
- // result: (LoweredMove [t.(Type).Alignment()] dst src (ADDVconst <src.Type> src [s-moveSize(t.(Type).Alignment(), config)]) mem)
+ // cond: s > 24 || t.(*types.Type).Alignment()%8 != 0
+ // result: (LoweredMove [t.(*types.Type).Alignment()] dst src (ADDVconst <src.Type> src [s-moveSize(t.(*types.Type).Alignment(), config)]) mem)
for {
s := v.AuxInt
t := v.Aux
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(s > 24 || t.(Type).Alignment()%8 != 0) {
+ if !(s > 24 || t.(*types.Type).Alignment()%8 != 0) {
break
}
v.reset(OpMIPS64LoweredMove)
- v.AuxInt = t.(Type).Alignment()
+ v.AuxInt = t.(*types.Type).Alignment()
v.AddArg(dst)
v.AddArg(src)
v0 := b.NewValue0(v.Pos, OpMIPS64ADDVconst, src.Type)
- v0.AuxInt = s - moveSize(t.(Type).Alignment(), config)
+ v0.AuxInt = s - moveSize(t.(*types.Type).Alignment(), config)
v0.AddArg(src)
v.AddArg(v0)
v.AddArg(mem)
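The Move hunks above all follow one dispatch scheme: pick the widest load/store (MOVVstore, MOVWstore, MOVHstore, MOVBstore) whose size divides both the move size and the type's alignment, falling back to LoweredMove for large or unaligned moves. A sketch of that width selection (an illustrative helper, not the compiler's actual code; `widthFor` is a hypothetical name):

```go
package main

import "fmt"

// widthFor picks the store width the Move rules would use: the widest of
// 8/4/2 bytes that evenly divides both the total size and the alignment,
// else byte-at-a-time. Mirrors conds like t.(*types.Type).Alignment()%8 == 0.
func widthFor(size, align int64) int64 {
	for _, w := range []int64{8, 4, 2} {
		if size%w == 0 && align%w == 0 {
			return w
		}
	}
	return 1
}

func main() {
	fmt.Println(widthFor(8, 8)) // one MOVVstore
	fmt.Println(widthFor(8, 4)) // two MOVWstores
	fmt.Println(widthFor(3, 1)) // three MOVBstores
}
```

This matches the rules shown: `Move [8]` with 8-byte alignment becomes a single MOVVstore, with 4-byte alignment a pair of MOVWstores, and so on down to MOVBstore chains for unaligned data.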
func rewriteValueMIPS64_OpMul16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul16 x y)
// cond:
// result: (Select1 (MULVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpMul32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul32 x y)
// cond:
// result: (Select1 (MULVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpMul64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul64 x y)
// cond:
// result: (Select1 (MULVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpMul8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul8 x y)
// cond:
// result: (Select1 (MULVU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpSelect1)
- v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, MakeTuple(types.UInt64, types.UInt64))
+ v0 := b.NewValue0(v.Pos, OpMIPS64MULVU, types.NewTuple(typ.UInt64, typ.UInt64))
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpNeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq16 x y)
// cond:
// result: (SGTU (XOR (ZeroExt16to32 x) (ZeroExt16to64 y)) (MOVVconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v3.AuxInt = 0
v.AddArg(v3)
return true
func rewriteValueMIPS64_OpNeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq32 x y)
// cond:
// result: (SGTU (XOR (ZeroExt32to64 x) (ZeroExt32to64 y)) (MOVVconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v3.AuxInt = 0
v.AddArg(v3)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagFalse)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQF, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQF, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpNeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq64 x y)
// cond:
// result: (SGTU (XOR x y) (MOVVconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v1.AuxInt = 0
v.AddArg(v1)
return true
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64FPFlagFalse)
- v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQD, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpMIPS64CMPEQD, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValueMIPS64_OpNeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq8 x y)
// cond:
// result: (SGTU (XOR (ZeroExt8to64 x) (ZeroExt8to64 y)) (MOVVconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v3.AuxInt = 0
v.AddArg(v3)
return true
func rewriteValueMIPS64_OpNeqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NeqPtr x y)
// cond:
// result: (SGTU (XOR x y) (MOVVconst [0]))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SGTU)
- v0 := b.NewValue0(v.Pos, OpMIPS64XOR, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64XOR, typ.UInt64)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v1.AuxInt = 0
v.AddArg(v1)
return true
func rewriteValueMIPS64_OpRsh16Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh16Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> (ZeroExt16to64 x) y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> (ZeroExt16to64 x) y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(x)
v3.AddArg(v4)
v3.AddArg(y)
func rewriteValueMIPS64_OpRsh16Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt16to64 x) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x16 <t> x y)
// cond:
- // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
+ // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x32 <t> x y)
// cond:
- // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
+ // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 <t> x y)
// cond:
- // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
+ // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v3.AddArg(y)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 63
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueMIPS64_OpRsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x8 <t> x y)
// cond:
- // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
+ // result: (SRAV (SignExt16to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt16to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh32Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh32Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh32Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> (ZeroExt32to64 x) y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> (ZeroExt32to64 x) y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(x)
v3.AddArg(v4)
v3.AddArg(y)
func rewriteValueMIPS64_OpRsh32Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt32to64 x) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x16 <t> x y)
// cond:
- // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
+ // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x32 <t> x y)
// cond:
- // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
+ // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x64 <t> x y)
// cond:
- // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
+ // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v3.AddArg(y)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 63
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueMIPS64_OpRsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x8 <t> x y)
// cond:
- // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
+ // result: (SRAV (SignExt32to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh64Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> x (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> x (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh64Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> x (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> x (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh64Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> x y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> x y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
func rewriteValueMIPS64_OpRsh64Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> x (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> x (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x16 <t> x y)
// cond:
- // result: (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
+ // result: (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v1 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 63
v2.AddArg(v4)
v1.AddArg(v2)
v0.AddArg(v1)
- v5 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v5.AddArg(y)
v0.AddArg(v5)
v.AddArg(v0)
func rewriteValueMIPS64_OpRsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x32 <t> x y)
// cond:
- // result: (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
+ // result: (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v1 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 63
v2.AddArg(v4)
v1.AddArg(v2)
v0.AddArg(v1)
- v5 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v5.AddArg(y)
v0.AddArg(v5)
v.AddArg(v0)
func rewriteValueMIPS64_OpRsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x64 <t> x y)
// cond:
- // result: (SRAV x (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
+ // result: (SRAV x (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
for {
t := v.Type
x := v.Args[0]
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v1 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v2.AddArg(y)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 63
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueMIPS64_OpRsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x8 <t> x y)
// cond:
- // result: (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
+ // result: (SRAV x (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
v.AddArg(x)
v0 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v1 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 63
v2.AddArg(v4)
v1.AddArg(v2)
v0.AddArg(v1)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(y)
v0.AddArg(v5)
v.AddArg(v0)
func rewriteValueMIPS64_OpRsh8Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux16 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt16to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt16to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh8Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux32 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt32to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt32to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) y)) (SRLV <t> (ZeroExt8to64 x) y))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) y)) (SRLV <t> (ZeroExt8to64 x) y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
v1.AddArg(y)
v0.AddArg(v1)
v.AddArg(v0)
v3 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(x)
v3.AddArg(v4)
v3.AddArg(y)
func rewriteValueMIPS64_OpRsh8Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux8 <t> x y)
// cond:
- // result: (AND (NEGV <t> (SGTU (Const64 <types.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt8to64 y)))
+ // result: (AND (NEGV <t> (SGTU (Const64 <typ.UInt64> [64]) (ZeroExt8to64 y))) (SRLV <t> (ZeroExt8to64 x) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64AND)
v0 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 64
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpMIPS64SRLV, t)
- v5 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v6.AddArg(y)
v4.AddArg(v6)
v.AddArg(v4)
func rewriteValueMIPS64_OpRsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x16 <t> x y)
// cond:
- // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt16to64 y)))
+ // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt16to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt16to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x32 <t> x y)
// cond:
- // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt32to64 y)))
+ // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt32to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt32to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
func rewriteValueMIPS64_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 <t> x y)
// cond:
- // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <types.UInt64> [63]))) y))
+ // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU y (Const64 <typ.UInt64> [63]))) y))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
v3.AddArg(y)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 63
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueMIPS64_OpRsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x8 <t> x y)
// cond:
- // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <types.UInt64> [63]))) (ZeroExt8to64 y)))
+ // result: (SRAV (SignExt8to64 x) (OR <t> (NEGV <t> (SGTU (ZeroExt8to64 y) (Const64 <typ.UInt64> [63]))) (ZeroExt8to64 y)))
for {
t := v.Type
x := v.Args[0]
y := v.Args[1]
v.reset(OpMIPS64SRAV)
- v0 := b.NewValue0(v.Pos, OpSignExt8to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to64, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpMIPS64OR, t)
v2 := b.NewValue0(v.Pos, OpMIPS64NEGV, t)
- v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, types.Bool)
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpMIPS64SGTU, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v3.AddArg(v5)
v2.AddArg(v3)
v1.AddArg(v2)
- v6 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v6.AddArg(y)
v1.AddArg(v6)
v.AddArg(v1)
}
func rewriteValueMIPS64_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpMIPS64MOVBstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVHstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpMIPS64MOVHstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && !is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && !is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && !is32BitFloat(val.Type)) {
break
}
v.reset(OpMIPS64MOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && !is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type)
// result: (MOVVstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && !is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && !is64BitFloat(val.Type)) {
break
}
v.reset(OpMIPS64MOVVstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (MOVFstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpMIPS64MOVFstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (MOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpMIPS64MOVDstore)
func rewriteValueMIPS64_OpZero_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [0] _ mem)
// cond:
// result: mem
mem := v.Args[1]
v.reset(OpMIPS64MOVBstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [2] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore ptr (MOVVconst [0]) mem)
for {
if v.AuxInt != 2 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
v.reset(OpMIPS64MOVBstore)
v.AuxInt = 1
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
return true
}
// match: (Zero [4] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore ptr (MOVVconst [0]) mem)
for {
if v.AuxInt != 4 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPS64MOVWstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [4] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [2] ptr (MOVVconst [0]) (MOVHstore [0] ptr (MOVVconst [0]) mem))
for {
if v.AuxInt != 4 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
v.reset(OpMIPS64MOVBstore)
v.AuxInt = 3
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v3.AuxInt = 1
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v5.AuxInt = 0
v5.AddArg(ptr)
- v6 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v6.AuxInt = 0
v5.AddArg(v6)
v5.AddArg(mem)
return true
}
// match: (Zero [8] {t} ptr mem)
- // cond: t.(Type).Alignment()%8 == 0
+ // cond: t.(*types.Type).Alignment()%8 == 0
// result: (MOVVstore ptr (MOVVconst [0]) mem)
for {
if v.AuxInt != 8 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%8 == 0) {
+ if !(t.(*types.Type).Alignment()%8 == 0) {
break
}
v.reset(OpMIPS64MOVVstore)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(mem)
return true
}
// match: (Zero [8] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [4] ptr (MOVVconst [0]) (MOVWstore [0] ptr (MOVVconst [0]) mem))
for {
if v.AuxInt != 8 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPS64MOVWstore)
v.AuxInt = 4
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
return true
}
// match: (Zero [8] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [6] ptr (MOVVconst [0]) (MOVHstore [4] ptr (MOVVconst [0]) (MOVHstore [2] ptr (MOVVconst [0]) (MOVHstore [0] ptr (MOVVconst [0]) mem))))
for {
if v.AuxInt != 8 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AuxInt = 6
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v3.AuxInt = 2
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v5.AuxInt = 0
v5.AddArg(ptr)
- v6 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v6.AuxInt = 0
v5.AddArg(v6)
v5.AddArg(mem)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Zero [3] ptr mem)
// cond:
// result: (MOVBstore [2] ptr (MOVVconst [0]) (MOVBstore [1] ptr (MOVVconst [0]) (MOVBstore [0] ptr (MOVVconst [0]) mem)))
v.reset(OpMIPS64MOVBstore)
v.AuxInt = 2
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v1.AuxInt = 1
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVBstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [6] {t} ptr mem)
- // cond: t.(Type).Alignment()%2 == 0
+ // cond: t.(*types.Type).Alignment()%2 == 0
// result: (MOVHstore [4] ptr (MOVVconst [0]) (MOVHstore [2] ptr (MOVVconst [0]) (MOVHstore [0] ptr (MOVVconst [0]) mem)))
for {
if v.AuxInt != 6 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%2 == 0) {
+ if !(t.(*types.Type).Alignment()%2 == 0) {
break
}
v.reset(OpMIPS64MOVHstore)
v.AuxInt = 4
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v1.AuxInt = 2
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVHstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [12] {t} ptr mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVWstore [8] ptr (MOVVconst [0]) (MOVWstore [4] ptr (MOVVconst [0]) (MOVWstore [0] ptr (MOVVconst [0]) mem)))
for {
if v.AuxInt != 12 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpMIPS64MOVWstore)
v.AuxInt = 8
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVWstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [16] {t} ptr mem)
- // cond: t.(Type).Alignment()%8 == 0
+ // cond: t.(*types.Type).Alignment()%8 == 0
// result: (MOVVstore [8] ptr (MOVVconst [0]) (MOVVstore [0] ptr (MOVVconst [0]) mem))
for {
if v.AuxInt != 16 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%8 == 0) {
+ if !(t.(*types.Type).Alignment()%8 == 0) {
break
}
v.reset(OpMIPS64MOVVstore)
v.AuxInt = 8
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
v1.AddArg(mem)
return true
}
// match: (Zero [24] {t} ptr mem)
- // cond: t.(Type).Alignment()%8 == 0
+ // cond: t.(*types.Type).Alignment()%8 == 0
// result: (MOVVstore [16] ptr (MOVVconst [0]) (MOVVstore [8] ptr (MOVVconst [0]) (MOVVstore [0] ptr (MOVVconst [0]) mem)))
for {
if v.AuxInt != 24 {
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(t.(Type).Alignment()%8 == 0) {
+ if !(t.(*types.Type).Alignment()%8 == 0) {
break
}
v.reset(OpMIPS64MOVVstore)
v.AuxInt = 16
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(ptr)
- v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v2.AuxInt = 0
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpMIPS64MOVVstore, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(ptr)
- v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpMIPS64MOVVconst, typ.UInt64)
v4.AuxInt = 0
v3.AddArg(v4)
v3.AddArg(mem)
return true
}
// match: (Zero [s] {t} ptr mem)
- // cond: s%8 == 0 && s > 24 && s <= 8*128 && t.(Type).Alignment()%8 == 0 && !config.noDuffDevice
+ // cond: s%8 == 0 && s > 24 && s <= 8*128 && t.(*types.Type).Alignment()%8 == 0 && !config.noDuffDevice
// result: (DUFFZERO [8 * (128 - int64(s/8))] ptr mem)
for {
s := v.AuxInt
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !(s%8 == 0 && s > 24 && s <= 8*128 && t.(Type).Alignment()%8 == 0 && !config.noDuffDevice) {
+ if !(s%8 == 0 && s > 24 && s <= 8*128 && t.(*types.Type).Alignment()%8 == 0 && !config.noDuffDevice) {
break
}
v.reset(OpMIPS64DUFFZERO)
return true
}
// match: (Zero [s] {t} ptr mem)
- // cond: (s > 8*128 || config.noDuffDevice) || t.(Type).Alignment()%8 != 0
- // result: (LoweredZero [t.(Type).Alignment()] ptr (ADDVconst <ptr.Type> ptr [s-moveSize(t.(Type).Alignment(), config)]) mem)
+ // cond: (s > 8*128 || config.noDuffDevice) || t.(*types.Type).Alignment()%8 != 0
+ // result: (LoweredZero [t.(*types.Type).Alignment()] ptr (ADDVconst <ptr.Type> ptr [s-moveSize(t.(*types.Type).Alignment(), config)]) mem)
for {
s := v.AuxInt
t := v.Aux
ptr := v.Args[0]
mem := v.Args[1]
- if !((s > 8*128 || config.noDuffDevice) || t.(Type).Alignment()%8 != 0) {
+ if !((s > 8*128 || config.noDuffDevice) || t.(*types.Type).Alignment()%8 != 0) {
break
}
v.reset(OpMIPS64LoweredZero)
- v.AuxInt = t.(Type).Alignment()
+ v.AuxInt = t.(*types.Type).Alignment()
v.AddArg(ptr)
v0 := b.NewValue0(v.Pos, OpMIPS64ADDVconst, ptr.Type)
- v0.AuxInt = s - moveSize(t.(Type).Alignment(), config)
+ v0.AuxInt = s - moveSize(t.(*types.Type).Alignment(), config)
v0.AddArg(ptr)
v.AddArg(v0)
v.AddArg(mem)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockMIPS64EQ:
// match: (EQ (FPFlagTrue cmp) yes no)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValuePPC64(v *Value) bool {
switch v.Op {
func rewriteValuePPC64_OpCvt32Fto32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt32Fto32 x)
// cond:
// result: (Xf2i64 (FCTIWZ x))
for {
x := v.Args[0]
v.reset(OpPPC64Xf2i64)
- v0 := b.NewValue0(v.Pos, OpPPC64FCTIWZ, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCTIWZ, typ.Float64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuePPC64_OpCvt32Fto64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt32Fto64 x)
// cond:
// result: (Xf2i64 (FCTIDZ x))
for {
x := v.Args[0]
v.reset(OpPPC64Xf2i64)
- v0 := b.NewValue0(v.Pos, OpPPC64FCTIDZ, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCTIDZ, typ.Float64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuePPC64_OpCvt32to32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt32to32F x)
// cond:
// result: (FRSP (FCFID (Xi2f64 (SignExt32to64 x))))
for {
x := v.Args[0]
v.reset(OpPPC64FRSP)
- v0 := b.NewValue0(v.Pos, OpPPC64FCFID, types.Float64)
- v1 := b.NewValue0(v.Pos, OpPPC64Xi2f64, types.Float64)
- v2 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCFID, typ.Float64)
+ v1 := b.NewValue0(v.Pos, OpPPC64Xi2f64, typ.Float64)
+ v2 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v2.AddArg(x)
v1.AddArg(v2)
v0.AddArg(v1)
func rewriteValuePPC64_OpCvt32to64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt32to64F x)
// cond:
// result: (FCFID (Xi2f64 (SignExt32to64 x)))
for {
x := v.Args[0]
v.reset(OpPPC64FCFID)
- v0 := b.NewValue0(v.Pos, OpPPC64Xi2f64, types.Float64)
- v1 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64Xi2f64, typ.Float64)
+ v1 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuePPC64_OpCvt64Fto32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt64Fto32 x)
// cond:
// result: (Xf2i64 (FCTIWZ x))
for {
x := v.Args[0]
v.reset(OpPPC64Xf2i64)
- v0 := b.NewValue0(v.Pos, OpPPC64FCTIWZ, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCTIWZ, typ.Float64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuePPC64_OpCvt64Fto64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt64Fto64 x)
// cond:
// result: (Xf2i64 (FCTIDZ x))
for {
x := v.Args[0]
v.reset(OpPPC64Xf2i64)
- v0 := b.NewValue0(v.Pos, OpPPC64FCTIDZ, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCTIDZ, typ.Float64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuePPC64_OpCvt64to32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt64to32F x)
// cond:
// result: (FRSP (FCFID (Xi2f64 x)))
for {
x := v.Args[0]
v.reset(OpPPC64FRSP)
- v0 := b.NewValue0(v.Pos, OpPPC64FCFID, types.Float64)
- v1 := b.NewValue0(v.Pos, OpPPC64Xi2f64, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCFID, typ.Float64)
+ v1 := b.NewValue0(v.Pos, OpPPC64Xi2f64, typ.Float64)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuePPC64_OpCvt64to64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Cvt64to64F x)
// cond:
// result: (FCFID (Xi2f64 x))
for {
x := v.Args[0]
v.reset(OpPPC64FCFID)
- v0 := b.NewValue0(v.Pos, OpPPC64Xi2f64, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpPPC64Xi2f64, typ.Float64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuePPC64_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (DIVW (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64DIVW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (DIVWU (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64DIVWU)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (DIVW (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64DIVW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (DIVWU (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64DIVWU)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpEq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq16 x y)
// cond: isSigned(x.Type) && isSigned(y.Type)
// result: (Equal (CMPW (SignExt16to32 x) (SignExt16to32 y)))
break
}
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpEq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq8 x y)
// cond: isSigned(x.Type) && isSigned(y.Type)
// result: (Equal (CMPW (SignExt8to32 x) (SignExt8to32 y)))
break
}
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpEqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqB x y)
// cond:
// result: (ANDconst [1] (EQV x y))
y := v.Args[1]
v.reset(OpPPC64ANDconst)
v.AuxInt = 1
- v0 := b.NewValue0(v.Pos, OpPPC64EQV, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64EQV, typ.Int64)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64Equal)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpGeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16 x y)
// cond:
// result: (GreaterEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpGeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16U x y)
// cond:
// result: (GreaterEqual (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FGreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpGeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8 x y)
// cond:
// result: (GreaterEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpGeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8U x y)
// cond:
// result: (GreaterEqual (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpGreater16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16 x y)
// cond:
// result: (GreaterThan (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpGreater16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16U x y)
// cond:
// result: (GreaterThan (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FGreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FGreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpGreater8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8 x y)
// cond:
// result: (GreaterThan (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpGreater8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8U x y)
// cond:
// result: (GreaterThan (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64GreaterThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPU, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
for {
ptr := v.Args[0]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPconst, types.TypeFlags)
v0.AuxInt = 0
v0.AddArg(ptr)
v.AddArg(v0)
idx := v.Args[0]
len := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPU, types.TypeFlags)
v0.AddArg(idx)
v0.AddArg(len)
v.AddArg(v0)
func rewriteValuePPC64_OpLeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16 x y)
// cond:
// result: (LessEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpLeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16U x y)
// cond:
// result: (LessEqual (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FLessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FLessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpLeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8 x y)
// cond:
// result: (LessEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpLeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8U x y)
// cond:
// result: (LessEqual (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpLess16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16 x y)
// cond:
// result: (LessThan (CMPW (SignExt16to32 x) (SignExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpLess16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16U x y)
// cond:
// result: (LessThan (CMPWU (ZeroExt16to32 x) (ZeroExt16to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FLessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64FLessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpLess8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8 x y)
// cond:
// result: (LessThan (CMPW (SignExt8to32 x) (SignExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpLess8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8U x y)
// cond:
// result: (LessThan (CMPWU (ZeroExt8to32 x) (ZeroExt8to32 y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64LessThan)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWU, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValuePPC64_OpLoad_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Load <t> ptr mem)
// cond: (is64BitInt(t) || isPtr(t))
// result: (MOVDload ptr mem)
break
}
v.reset(OpPPC64MOVBreg)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, typ.UInt8)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
func rewriteValuePPC64_OpLsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x16 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -16
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x32 x (Const64 [c]))
// cond: uint32(c) < 16
// result: (SLWconst x [c])
}
// match: (Lsh16x32 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -16
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x64 x (Const64 [c]))
// cond: uint64(c) < 16
// result: (SLWconst x [c])
}
// match: (Lsh16x64 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -16
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x8 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -16
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x16 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x32 x (Const64 [c]))
// cond: uint32(c) < 32
// result: (SLWconst x [c])
}
// match: (Lsh32x32 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x64 x (Const64 [c]))
// cond: uint64(c) < 32
// result: (SLWconst x [c])
}
// match: (Lsh32x64 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x8 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x16 x y)
// cond:
- // result: (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
+ // result: (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x32 x (Const64 [c]))
// cond: uint32(c) < 64
// result: (SLDconst x [c])
}
// match: (Lsh64x32 x y)
// cond:
- // result: (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
+ // result: (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x64 x (Const64 [c]))
// cond: uint64(c) < 64
// result: (SLDconst x [c])
}
// match: (Lsh64x64 x y)
// cond:
- // result: (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
+ // result: (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x8 x y)
// cond:
- // result: (SLD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
+ // result: (SLD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x16 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -8
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x32 x (Const64 [c]))
// cond: uint32(c) < 8
// result: (SLWconst x [c])
}
// match: (Lsh8x32 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -8
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x64 x (Const64 [c]))
// cond: uint64(c) < 8
// result: (SLWconst x [c])
}
// match: (Lsh8x64 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -8
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpLsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x8 x y)
// cond:
- // result: (SLW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
+ // result: (SLW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SLW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -8
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (Mod32 (SignExt16to32 x) (SignExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (Mod32u (ZeroExt16to32 x) (ZeroExt16to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32u)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpMod32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32 x y)
// cond:
// result: (SUB x (MULLW y (DIVW x y)))
y := v.Args[1]
v.reset(OpPPC64SUB)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64MULLW, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64MULLW, typ.Int32)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64DIVW, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpPPC64DIVW, typ.Int32)
v1.AddArg(x)
v1.AddArg(y)
v0.AddArg(v1)
func rewriteValuePPC64_OpMod32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32u x y)
// cond:
// result: (SUB x (MULLW y (DIVWU x y)))
y := v.Args[1]
v.reset(OpPPC64SUB)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64MULLW, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64MULLW, typ.Int32)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64DIVWU, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpPPC64DIVWU, typ.Int32)
v1.AddArg(x)
v1.AddArg(y)
v0.AddArg(v1)
func rewriteValuePPC64_OpMod64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod64 x y)
// cond:
// result: (SUB x (MULLD y (DIVD x y)))
y := v.Args[1]
v.reset(OpPPC64SUB)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64MULLD, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64MULLD, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64DIVD, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64DIVD, typ.Int64)
v1.AddArg(x)
v1.AddArg(y)
v0.AddArg(v1)
func rewriteValuePPC64_OpMod64u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod64u x y)
// cond:
// result: (SUB x (MULLD y (DIVDU x y)))
y := v.Args[1]
v.reset(OpPPC64SUB)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64MULLD, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64MULLD, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64DIVDU, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64DIVDU, typ.Int64)
v1.AddArg(x)
v1.AddArg(y)
v0.AddArg(v1)
func rewriteValuePPC64_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (Mod32 (SignExt8to32 x) (SignExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (Mod32u (ZeroExt8to32 x) (ZeroExt8to32 y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpMod32u)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuePPC64_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpPPC64MOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpPPC64MOVHstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVHZload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpPPC64MOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVWZload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
return true
}
// match: (Move [8] {t} dst src mem)
- // cond: t.(Type).Alignment()%4 == 0
+ // cond: t.(*types.Type).Alignment()%4 == 0
// result: (MOVDstore dst (MOVDload src mem) mem)
for {
if v.AuxInt != 8 {
dst := v.Args[0]
src := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Alignment()%4 == 0) {
+ if !(t.(*types.Type).Alignment()%4 == 0) {
break
}
v.reset(OpPPC64MOVDstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDload, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDload, typ.Int64)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpPPC64MOVWstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVWZload, typ.UInt32)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVWZload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpPPC64MOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVHstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVHload, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVHload, typ.Int16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpPPC64MOVBstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, typ.UInt8)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVWZload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpPPC64MOVHstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVHZload, typ.UInt16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVWZload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpPPC64MOVBstore)
v.AuxInt = 6
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVBZload, typ.UInt8)
v0.AuxInt = 6
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVHZload, typ.UInt16)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpPPC64MOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpPPC64MOVWstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpPPC64MOVWZload, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpPPC64MOVWZload, typ.UInt32)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
func rewriteValuePPC64_OpNeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq16 x y)
// cond: isSigned(x.Type) && isSigned(y.Type)
// result: (NotEqual (CMPW (SignExt16to32 x) (SignExt16to32 y)))
break
}
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64FCMPU, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpNeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq8 x y)
// cond: isSigned(x.Type) && isSigned(y.Type)
// result: (NotEqual (CMPW (SignExt8to32 x) (SignExt8to32 y)))
break
}
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPW, TypeFlags)
- v1 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPW, types.TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64NotEqual)
- v0 := b.NewValue0(v.Pos, OpPPC64CMP, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMP, types.TypeFlags)
v0.AddArg(x)
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpOffPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OffPtr [off] ptr)
// cond:
- // result: (ADD (MOVDconst <types.Int64> [off]) ptr)
+ // result: (ADD (MOVDconst <typ.Int64> [off]) ptr)
for {
off := v.AuxInt
ptr := v.Args[0]
v.reset(OpPPC64ADD)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDconst, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDconst, typ.Int64)
v0.AuxInt = off
v.AddArg(v0)
v.AddArg(ptr)
break
}
v.reset(OpPPC64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpPPC64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPUconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPUconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpPPC64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(y)
v.AddArg(v0)
break
}
v.reset(OpPPC64InvertFlags)
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWUconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWUconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(y)
v.AddArg(v0)
func rewriteValuePPC64_OpRsh16Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux16 x y)
// cond:
- // result: (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
+ // result: (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux32 x (Const64 [c]))
// cond: uint32(c) < 16
// result: (SRWconst (ZeroExt16to32 x) [c])
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh16Ux32 x y)
// cond:
- // result: (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
+ // result: (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 x (Const64 [c]))
// cond: uint64(c) < 16
// result: (SRWconst (ZeroExt16to32 x) [c])
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh16Ux64 x y)
// cond:
- // result: (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
+ // result: (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux8 x y)
// cond:
- // result: (SRW (ZeroExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
+ // result: (SRW (ZeroExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x16 x y)
// cond:
- // result: (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
+ // result: (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x32 x (Const64 [c]))
// cond: uint32(c) < 16
// result: (SRAWconst (SignExt16to32 x) [c])
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh16x32 x y)
// cond:
- // result: (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
+ // result: (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 x (Const64 [c]))
// cond: uint64(c) < 16
// result: (SRAWconst (SignExt16to32 x) [c])
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = 63
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh16x64 x y)
// cond:
- // result: (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
+ // result: (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x8 x y)
// cond:
- // result: (SRAW (SignExt16to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
+ // result: (SRAW (SignExt16to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-16] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -16
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh32Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux16 x y)
// cond:
- // result: (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
+ // result: (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux32 x (Const64 [c]))
// cond: uint32(c) < 32
// result: (SRWconst x [c])
}
// match: (Rsh32Ux32 x y)
// cond:
- // result: (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
+ // result: (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux64 x (Const64 [c]))
// cond: uint64(c) < 32
// result: (SRWconst x [c])
}
// match: (Rsh32Ux64 x y)
// cond:
- // result: (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
+ // result: (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux8 x y)
// cond:
- // result: (SRW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
+ // result: (SRW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x16 x y)
// cond:
- // result: (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
+ // result: (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x32 x (Const64 [c]))
// cond: uint32(c) < 32
// result: (SRAWconst x [c])
}
// match: (Rsh32x32 x y)
// cond:
- // result: (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
+ // result: (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x64 x (Const64 [c]))
// cond: uint64(c) < 32
// result: (SRAWconst x [c])
}
// match: (Rsh32x64 x y)
// cond:
- // result: (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
+ // result: (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x8 x y)
// cond:
- // result: (SRAW x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
+ // result: (SRAW x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-32] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -32
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux16 x y)
// cond:
- // result: (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
+ // result: (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux32 x (Const64 [c]))
// cond: uint32(c) < 64
// result: (SRDconst x [c])
}
// match: (Rsh64Ux32 x y)
// cond:
- // result: (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
+ // result: (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux64 x (Const64 [c]))
// cond: uint64(c) < 64
// result: (SRDconst x [c])
}
// match: (Rsh64Ux64 x y)
// cond:
- // result: (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
+ // result: (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux8 x y)
// cond:
- // result: (SRD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
+ // result: (SRD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x16 x y)
// cond:
- // result: (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
+ // result: (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x32 x (Const64 [c]))
// cond: uint32(c) < 64
// result: (SRADconst x [c])
}
// match: (Rsh64x32 x y)
// cond:
- // result: (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
+ // result: (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x64 x (Const64 [c]))
// cond: uint64(c) < 64
// result: (SRADconst x [c])
}
// match: (Rsh64x64 x y)
// cond:
- // result: (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
+ // result: (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x8 x y)
// cond:
- // result: (SRAD x (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
+ // result: (SRAD x (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-64] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAD)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v0.AddArg(y)
- v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v1 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v2 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v2.AuxInt = -64
- v3 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValuePPC64_OpRsh8Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux16 x y)
// cond:
- // result: (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
+ // result: (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux32 x (Const64 [c]))
// cond: uint32(c) < 8
// result: (SRWconst (ZeroExt8to32 x) [c])
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh8Ux32 x y)
// cond:
- // result: (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
+ // result: (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 x (Const64 [c]))
// cond: uint64(c) < 8
// result: (SRWconst (ZeroExt8to32 x) [c])
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh8Ux64 x y)
// cond:
- // result: (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
+ // result: (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux8 x y)
// cond:
- // result: (SRW (ZeroExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
+ // result: (SRW (ZeroExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRW)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x16 x y)
// cond:
- // result: (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
+ // result: (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt16to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
- v4 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x32 x (Const64 [c]))
// cond: uint32(c) < 8
// result: (SRAWconst (SignExt8to32 x) [c])
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh8x32 x y)
// cond:
- // result: (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
+ // result: (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt32to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 x (Const64 [c]))
// cond: uint64(c) < 8
// result: (SRAWconst (SignExt8to32 x) [c])
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = 63
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
v.reset(OpPPC64SRAWconst)
v.AuxInt = c
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh8x64 x y)
// cond:
- // result: (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
+ // result: (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValuePPC64_OpRsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x8 x y)
// cond:
- // result: (SRAW (SignExt8to32 x) (ORN y <types.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
+ // result: (SRAW (SignExt8to32 x) (ORN y <typ.Int64> (MaskIfNotCarry (ADDconstForCarry [-8] (ZeroExt8to64 y)))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpPPC64SRAW)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpPPC64ORN, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpPPC64ORN, typ.Int64)
v1.AddArg(y)
- v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, types.Int64)
- v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpPPC64MaskIfNotCarry, typ.Int64)
+ v3 := b.NewValue0(v.Pos, OpPPC64ADDconstForCarry, types.TypeFlags)
v3.AuxInt = -8
- v4 := b.NewValue0(v.Pos, OpZeroExt8to64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt8to64, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
}
func rewriteValuePPC64_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (FMOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpPPC64FMOVDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is32BitFloat(val.Type)
// result: (FMOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is32BitFloat(val.Type)) {
break
}
v.reset(OpPPC64FMOVDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (FMOVSstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpPPC64FMOVSstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && (is64BitInt(val.Type) || isPtr(val.Type))
+ // cond: t.(*types.Type).Size() == 8 && (is64BitInt(val.Type) || isPtr(val.Type))
// result: (MOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && (is64BitInt(val.Type) || isPtr(val.Type))) {
+ if !(t.(*types.Type).Size() == 8 && (is64BitInt(val.Type) || isPtr(val.Type))) {
break
}
v.reset(OpPPC64MOVDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitInt(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitInt(val.Type)
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitInt(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitInt(val.Type)) {
break
}
v.reset(OpPPC64MOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVHstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpPPC64MOVHstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpPPC64MOVBstore)
v.reset(OpPPC64MOVBstorezero)
v.AuxInt = 2
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVHstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVHstorezero, types.TypeMem)
v0.AddArg(destptr)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpPPC64MOVBstorezero)
v.AuxInt = 4
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVWstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVWstorezero, types.TypeMem)
v0.AddArg(destptr)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpPPC64MOVHstorezero)
v.AuxInt = 4
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVWstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVWstorezero, types.TypeMem)
v0.AddArg(destptr)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpPPC64MOVBstorezero)
v.AuxInt = 6
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVHstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVHstorezero, types.TypeMem)
v0.AuxInt = 4
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVWstorezero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVWstorezero, types.TypeMem)
v1.AddArg(destptr)
v1.AddArg(mem)
v0.AddArg(v1)
v.reset(OpPPC64MOVWstorezero)
v.AuxInt = 8
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpPPC64MOVDstorezero)
v.AuxInt = 8
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpPPC64MOVDstorezero)
v.AuxInt = 16
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 8
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v1.AuxInt = 0
v1.AddArg(destptr)
v1.AddArg(mem)
v.reset(OpPPC64MOVDstorezero)
v.AuxInt = 24
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 16
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v2.AuxInt = 0
v2.AddArg(destptr)
v2.AddArg(mem)
v.reset(OpPPC64MOVDstorezero)
v.AuxInt = 32
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 24
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v1.AuxInt = 16
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v2.AuxInt = 8
v2.AddArg(destptr)
- v3 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v3.AuxInt = 0
v3.AddArg(destptr)
v3.AddArg(mem)
v.reset(OpPPC64MOVDstorezero)
v.AuxInt = 40
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 32
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v1.AuxInt = 24
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v2.AuxInt = 16
v2.AddArg(destptr)
- v3 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v3.AuxInt = 8
v3.AddArg(destptr)
- v4 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v4 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v4.AuxInt = 0
v4.AddArg(destptr)
v4.AddArg(mem)
v.reset(OpPPC64MOVDstorezero)
v.AuxInt = 48
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v0.AuxInt = 40
v0.AddArg(destptr)
- v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v1.AuxInt = 32
v1.AddArg(destptr)
- v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v2.AuxInt = 24
v2.AddArg(destptr)
- v3 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v3.AuxInt = 16
v3.AddArg(destptr)
- v4 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v4 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v4.AuxInt = 8
v4.AddArg(destptr)
- v5 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpPPC64MOVDstorezero, types.TypeMem)
v5.AuxInt = 0
v5.AddArg(destptr)
v5.AddArg(mem)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockPPC64EQ:
// match: (EQ (CMPconst [0] (ANDconst [c] x)) yes no)
c := v_0.AuxInt
x := v_0.Args[0]
b.Kind = BlockPPC64EQ
- v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
b.SetControl(v0)
c := v_0.AuxInt
x := v_0.Args[0]
b.Kind = BlockPPC64EQ
- v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
b.SetControl(v0)
_ = v
cond := b.Control
b.Kind = BlockPPC64NE
- v0 := b.NewValue0(v.Pos, OpPPC64CMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64CMPWconst, types.TypeFlags)
v0.AuxInt = 0
v0.AddArg(cond)
b.SetControl(v0)
c := v_0.AuxInt
x := v_0.Args[0]
b.Kind = BlockPPC64NE
- v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
b.SetControl(v0)
c := v_0.AuxInt
x := v_0.Args[0]
b.Kind = BlockPPC64NE
- v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpPPC64ANDCCconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
b.SetControl(v0)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValueS390X(v *Value) bool {
switch v.Op {
func rewriteValueS390X_OpAtomicAdd32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicAdd32 ptr val mem)
// cond:
// result: (AddTupleFirst32 (LAA ptr val mem) val)
val := v.Args[1]
mem := v.Args[2]
v.reset(OpS390XAddTupleFirst32)
- v0 := b.NewValue0(v.Pos, OpS390XLAA, MakeTuple(types.UInt32, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpS390XLAA, types.NewTuple(typ.UInt32, types.TypeMem))
v0.AddArg(ptr)
v0.AddArg(val)
v0.AddArg(mem)
func rewriteValueS390X_OpAtomicAdd64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (AtomicAdd64 ptr val mem)
// cond:
// result: (AddTupleFirst64 (LAAG ptr val mem) val)
val := v.Args[1]
mem := v.Args[2]
v.reset(OpS390XAddTupleFirst64)
- v0 := b.NewValue0(v.Pos, OpS390XLAAG, MakeTuple(types.UInt64, TypeMem))
+ v0 := b.NewValue0(v.Pos, OpS390XLAAG, types.NewTuple(typ.UInt64, types.TypeMem))
v0.AddArg(ptr)
v0.AddArg(val)
v0.AddArg(mem)
func rewriteValueS390X_OpBitLen64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitLen64 x)
// cond:
// result: (SUB (MOVDconst [64]) (FLOGR x))
for {
x := v.Args[0]
v.reset(OpS390XSUB)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 64
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XFLOGR, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XFLOGR, typ.UInt64)
v1.AddArg(x)
v.AddArg(v1)
return true
func rewriteValueS390X_OpCtz32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Ctz32 <t> x)
// cond:
// result: (SUB (MOVDconst [64]) (FLOGR (MOVWZreg (ANDW <t> (SUBWconst <t> [1] x) (NOTW <t> x)))))
t := v.Type
x := v.Args[0]
v.reset(OpS390XSUB)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 64
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XFLOGR, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XFLOGR, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v3 := b.NewValue0(v.Pos, OpS390XANDW, t)
v4 := b.NewValue0(v.Pos, OpS390XSUBWconst, t)
v4.AuxInt = 1
func rewriteValueS390X_OpCtz64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Ctz64 <t> x)
// cond:
// result: (SUB (MOVDconst [64]) (FLOGR (AND <t> (SUBconst <t> [1] x) (NOT <t> x))))
t := v.Type
x := v.Args[0]
v.reset(OpS390XSUB)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 64
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XFLOGR, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XFLOGR, typ.UInt64)
v2 := b.NewValue0(v.Pos, OpS390XAND, t)
v3 := b.NewValue0(v.Pos, OpS390XSUBconst, t)
v3.AuxInt = 1
func rewriteValueS390X_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 x y)
// cond:
// result: (DIVW (MOVHreg x) (MOVHreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XDIVW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpDiv16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u x y)
// cond:
// result: (DIVWU (MOVHZreg x) (MOVHZreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XDIVWU)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpDiv32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32 x y)
// cond:
// result: (DIVW (MOVWreg x) y)
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XDIVW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(y)
func rewriteValueS390X_OpDiv32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32u x y)
// cond:
// result: (DIVWU (MOVWZreg x) y)
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XDIVWU)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(y)
func rewriteValueS390X_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 x y)
// cond:
// result: (DIVW (MOVBreg x) (MOVBreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XDIVW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u x y)
// cond:
// result: (DIVWU (MOVBZreg x) (MOVBZreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XDIVWU)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpEq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq16 x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVHreg x) (MOVHreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpEq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq32 x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (CMPW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPW, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPW, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpEq32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq32F x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (FCMPS x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMPS, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMPS, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpEq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq64 x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpEq64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq64F x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (FCMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpEq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq8 x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpEqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqB x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpEqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqPtr x y)
// cond:
// result: (MOVDEQ (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDEQ)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16 x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVHreg x) (MOVHreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq16U x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVHZreg x) (MOVHZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32 x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMPW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPW, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPW, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32F x y)
// cond:
// result: (MOVDGEnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMPS x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGEnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMPS, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMPS, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq32U x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMPWU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64 x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64F x y)
// cond:
// result: (MOVDGEnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGEnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64U x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMPU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8 x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq8U x y)
// cond:
// result: (MOVDGE (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVBZreg x) (MOVBZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGreater16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16 x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVHreg x) (MOVHreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGreater16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater16U x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVHZreg x) (MOVHZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGreater32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater32 x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMPW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPW, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPW, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGreater32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater32F x y)
// cond:
// result: (MOVDGTnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMPS x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGTnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMPS, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMPS, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGreater32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater32U x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMPWU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGreater64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater64 x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGreater64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater64F x y)
// cond:
// result: (MOVDGTnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGTnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGreater64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater64U x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMPU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpGreater8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8 x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpGreater8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater8U x y)
// cond:
// result: (MOVDGT (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVBZreg x) (MOVBZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpHmul32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32 x y)
// cond:
// result: (SRDconst [32] (MULLD (MOVWreg x) (MOVWreg y)))
y := v.Args[1]
v.reset(OpS390XSRDconst)
v.AuxInt = 32
- v0 := b.NewValue0(v.Pos, OpS390XMULLD, types.Int64)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMULLD, typ.Int64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWreg, typ.Int64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XMOVWreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWreg, typ.Int64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueS390X_OpHmul32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Hmul32u x y)
// cond:
// result: (SRDconst [32] (MULLD (MOVWZreg x) (MOVWZreg y)))
y := v.Args[1]
v.reset(OpS390XSRDconst)
v.AuxInt = 32
- v0 := b.NewValue0(v.Pos, OpS390XMULLD, types.Int64)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMULLD, typ.Int64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
func rewriteValueS390X_OpIsInBounds_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsInBounds idx len)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMPU idx len))
idx := v.Args[0]
len := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
v2.AddArg(idx)
v2.AddArg(len)
v.AddArg(v2)
func rewriteValueS390X_OpIsNonNil_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsNonNil p)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMPconst p [0]))
for {
p := v.Args[0]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPconst, types.TypeFlags)
v2.AuxInt = 0
v2.AddArg(p)
v.AddArg(v2)
func rewriteValueS390X_OpIsSliceInBounds_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (IsSliceInBounds idx len)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMPU idx len))
idx := v.Args[0]
len := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
v2.AddArg(idx)
v2.AddArg(len)
v.AddArg(v2)
func rewriteValueS390X_OpLeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16 x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVHreg x) (MOVHreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLeq16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq16U x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVHZreg x) (MOVHZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32 x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMPW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPW, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPW, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLeq32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32F x y)
// cond:
// result: (MOVDGEnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMPS y x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGEnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMPS, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMPS, types.TypeFlags)
v2.AddArg(y)
v2.AddArg(x)
v.AddArg(v2)
func rewriteValueS390X_OpLeq32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq32U x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMPWU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64 x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLeq64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64F x y)
// cond:
// result: (MOVDGEnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMP y x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGEnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMP, types.TypeFlags)
v2.AddArg(y)
v2.AddArg(x)
v.AddArg(v2)
func rewriteValueS390X_OpLeq64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64U x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMPU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8 x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLeq8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq8U x y)
// cond:
// result: (MOVDLE (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVBZreg x) (MOVBZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLess16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16 x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVHreg x) (MOVHreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLess16U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less16U x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVHZreg x) (MOVHZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLess32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less32 x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMPW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPW, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPW, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLess32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less32F x y)
// cond:
// result: (MOVDGTnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMPS y x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGTnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMPS, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMPS, types.TypeFlags)
v2.AddArg(y)
v2.AddArg(x)
v.AddArg(v2)
func rewriteValueS390X_OpLess32U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less32U x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMPWU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLess64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less64 x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLess64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less64F x y)
// cond:
// result: (MOVDGTnoinv (MOVDconst [0]) (MOVDconst [1]) (FCMP y x))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDGTnoinv)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMP, types.TypeFlags)
v2.AddArg(y)
v2.AddArg(x)
v.AddArg(v2)
func rewriteValueS390X_OpLess64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less64U x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMPU x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpLess8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8 x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLess8U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less8U x y)
// cond:
// result: (MOVDLT (MOVDconst [0]) (MOVDconst [1]) (CMPU (MOVBZreg x) (MOVBZreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDLT)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPU, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPU, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpLsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x16 <t> x y)
// cond:
// result: (ANDW (SLW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVHZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x8 <t> x y)
// cond:
// result: (ANDW (SLW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVBZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x16 <t> x y)
// cond:
// result: (ANDW (SLW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVHZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x8 <t> x y)
// cond:
// result: (ANDW (SLW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVBZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x16 <t> x y)
// cond:
// result: (AND (SLD <t> x y) (SUBEcarrymask <t> (CMPWUconst (MOVHZreg y) [63])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 63
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 63
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v2.AuxInt = 63
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x8 <t> x y)
// cond:
// result: (AND (SLD <t> x y) (SUBEcarrymask <t> (CMPWUconst (MOVBZreg y) [63])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 63
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x16 <t> x y)
// cond:
// result: (ANDW (SLW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVHZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueS390X_OpLsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x8 <t> x y)
// cond:
// result: (ANDW (SLW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVBZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueS390X_OpMod16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16 x y)
// cond:
// result: (MODW (MOVHreg x) (MOVHreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMODW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpMod16u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod16u x y)
// cond:
// result: (MODWU (MOVHZreg x) (MOVHZreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMODWU)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpMod32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32 x y)
// cond:
// result: (MODW (MOVWreg x) y)
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMODW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(y)
func rewriteValueS390X_OpMod32u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod32u x y)
// cond:
// result: (MODWU (MOVWZreg x) y)
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMODWU)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(y)
func rewriteValueS390X_OpMod8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8 x y)
// cond:
// result: (MODW (MOVBreg x) (MOVBreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMODW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpMod8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mod8u x y)
// cond:
// result: (MODWU (MOVBZreg x) (MOVBZreg y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMODWU)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValueS390X_OpMove_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [0] _ _ mem)
// cond:
// result: mem
mem := v.Args[2]
v.reset(OpS390XMOVBstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, typ.UInt8)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpS390XMOVHstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpS390XMOVWstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
mem := v.Args[2]
v.reset(OpS390XMOVDstore)
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
v.reset(OpS390XMOVDstore)
v.AuxInt = 8
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v0.AuxInt = 8
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpS390XMOVDstore)
v.AuxInt = 16
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v0.AuxInt = 16
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDstore, types.TypeMem)
v1.AuxInt = 8
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v2.AuxInt = 8
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpS390XMOVDstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVDstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
v.reset(OpS390XMOVBstore)
v.AuxInt = 2
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, typ.UInt8)
v0.AuxInt = 2
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpS390XMOVBstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, typ.UInt8)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
v.reset(OpS390XMOVHstore)
v.AuxInt = 4
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v0.AuxInt = 4
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWstore, types.TypeMem)
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
func rewriteValueS390X_OpMove_10(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Move [7] dst src mem)
// cond:
// result: (MOVBstore [6] dst (MOVBZload [6] src mem) (MOVHstore [4] dst (MOVHZload [4] src mem) (MOVWstore dst (MOVWZload src mem) mem)))
v.reset(OpS390XMOVBstore)
v.AuxInt = 6
v.AddArg(dst)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBZload, typ.UInt8)
v0.AuxInt = 6
v0.AddArg(src)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHstore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHstore, types.TypeMem)
v1.AuxInt = 4
v1.AddArg(dst)
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = 4
v2.AddArg(src)
v2.AddArg(mem)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWstore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWstore, types.TypeMem)
v3.AddArg(dst)
- v4 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v4.AddArg(src)
v4.AddArg(mem)
v3.AddArg(v4)
v.AuxInt = makeValAndOff(s-256, 256)
v.AddArg(dst)
v.AddArg(src)
- v0 := b.NewValue0(v.Pos, OpS390XMVC, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMVC, types.TypeMem)
v0.AuxInt = makeValAndOff(256, 0)
v0.AddArg(dst)
v0.AddArg(src)
v.AuxInt = makeValAndOff(s-512, 512)
v.AddArg(dst)
v.AddArg(src)
- v0 := b.NewValue0(v.Pos, OpS390XMVC, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMVC, types.TypeMem)
v0.AuxInt = makeValAndOff(256, 256)
v0.AddArg(dst)
v0.AddArg(src)
- v1 := b.NewValue0(v.Pos, OpS390XMVC, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMVC, types.TypeMem)
v1.AuxInt = makeValAndOff(256, 0)
v1.AddArg(dst)
v1.AddArg(src)
v.AuxInt = makeValAndOff(s-768, 768)
v.AddArg(dst)
v.AddArg(src)
- v0 := b.NewValue0(v.Pos, OpS390XMVC, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMVC, types.TypeMem)
v0.AuxInt = makeValAndOff(256, 512)
v0.AddArg(dst)
v0.AddArg(src)
- v1 := b.NewValue0(v.Pos, OpS390XMVC, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpS390XMVC, types.TypeMem)
v1.AuxInt = makeValAndOff(256, 256)
v1.AddArg(dst)
v1.AddArg(src)
- v2 := b.NewValue0(v.Pos, OpS390XMVC, TypeMem)
+ v2 := b.NewValue0(v.Pos, OpS390XMVC, types.TypeMem)
v2.AuxInt = makeValAndOff(256, 0)
v2.AddArg(dst)
v2.AddArg(src)
func rewriteValueS390X_OpNeg16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neg16 x)
// cond:
// result: (NEGW (MOVHreg x))
for {
x := v.Args[0]
v.reset(OpS390XNEGW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValueS390X_OpNeg8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neg8 x)
// cond:
// result: (NEGW (MOVBreg x))
for {
x := v.Args[0]
v.reset(OpS390XNEGW)
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValueS390X_OpNeq16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq16 x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVHreg x) (MOVHreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpNeq32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq32 x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMPW x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMPW, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPW, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpNeq32F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq32F x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (FCMPS x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMPS, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMPS, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpNeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq64 x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpNeq64F_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq64F x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (FCMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XFCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XFCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpNeq8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq8 x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpNeqB_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NeqB x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMP (MOVBreg x) (MOVBreg y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
- v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v4.AddArg(y)
v2.AddArg(v4)
v.AddArg(v2)
func rewriteValueS390X_OpNeqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NeqPtr x y)
// cond:
// result: (MOVDNE (MOVDconst [0]) (MOVDconst [1]) (CMP x y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpS390XMOVDNE)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = 0
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v1.AuxInt = 1
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpS390XCMP, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMP, types.TypeFlags)
v2.AddArg(x)
v2.AddArg(y)
v.AddArg(v2)
func rewriteValueS390X_OpOffPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OffPtr [off] ptr:(SP))
// cond:
// result: (MOVDaddr [off] ptr)
off := v.AuxInt
ptr := v.Args[0]
v.reset(OpS390XADD)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = off
v.AddArg(v0)
v.AddArg(ptr)
func rewriteValueS390X_OpRsh16Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux16 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVHZreg x) y) (SUBEWcarrymask <t> (CMPWUconst (MOVHZreg y) [15])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 15
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh16Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux32 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVHZreg x) y) (SUBEWcarrymask <t> (CMPWUconst y [15])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 15
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVHZreg x) y) (SUBEWcarrymask <t> (CMPUconst y [15])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v3.AuxInt = 15
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh16Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux8 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVHZreg x) y) (SUBEWcarrymask <t> (CMPWUconst (MOVBZreg y) [15])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 15
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh16x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x16 <t> x y)
// cond:
// result: (SRAW <t> (MOVHreg x) (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVHZreg y) [15])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XORW, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v4.AuxInt = 15
- v5 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh16x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x32 <t> x y)
// cond:
// result: (SRAW <t> (MOVHreg x) (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst y [15])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XORW, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v4.AuxInt = 15
v4.AddArg(y)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 <t> x y)
// cond:
// result: (SRAW <t> (MOVHreg x) (OR <y.Type> y (NOT <y.Type> (SUBEcarrymask <y.Type> (CMPUconst y [15])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XOR, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOT, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v4.AuxInt = 15
v4.AddArg(y)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh16x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x8 <t> x y)
// cond:
// result: (SRAW <t> (MOVHreg x) (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVBZreg y) [15])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XORW, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v4.AuxInt = 15
- v5 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh32Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux16 <t> x y)
// cond:
// result: (ANDW (SRW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVHZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v2.AuxInt = 31
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueS390X_OpRsh32Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux8 <t> x y)
// cond:
// result: (ANDW (SRW <t> x y) (SUBEWcarrymask <t> (CMPWUconst (MOVBZreg y) [31])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 31
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueS390X_OpRsh32x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x16 <t> x y)
// cond:
// result: (SRAW <t> x (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVHZreg y) [31])))))
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 31
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 31
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOT, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v3.AuxInt = 31
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh32x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x8 <t> x y)
// cond:
// result: (SRAW <t> x (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVBZreg y) [31])))))
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 31
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh64Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux16 <t> x y)
// cond:
// result: (AND (SRD <t> x y) (SUBEcarrymask <t> (CMPWUconst (MOVHZreg y) [63])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 63
- v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 63
v2.AddArg(y)
v1.AddArg(v2)
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v2.AuxInt = 63
v2.AddArg(y)
v1.AddArg(v2)
func rewriteValueS390X_OpRsh64Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux8 <t> x y)
// cond:
// result: (AND (SRD <t> x y) (SUBEcarrymask <t> (CMPWUconst (MOVBZreg y) [63])))
v0.AddArg(y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, t)
- v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v2 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v2.AuxInt = 63
- v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v3.AddArg(y)
v2.AddArg(v3)
v1.AddArg(v2)
func rewriteValueS390X_OpRsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x16 <t> x y)
// cond:
// result: (SRAD <t> x (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVHZreg y) [63])))))
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 63
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 63
v3.AddArg(y)
v2.AddArg(v3)
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOT, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v3.AuxInt = 63
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x8 <t> x y)
// cond:
// result: (SRAD <t> x (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVBZreg y) [63])))))
v0.AddArg(y)
v1 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 63
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh8Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux16 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVBZreg x) y) (SUBEWcarrymask <t> (CMPWUconst (MOVHZreg y) [7])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 7
- v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh8Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux32 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVBZreg x) y) (SUBEWcarrymask <t> (CMPWUconst y [7])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 7
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVBZreg x) y) (SUBEWcarrymask <t> (CMPUconst y [7])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v3.AuxInt = 7
v3.AddArg(y)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh8Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux8 <t> x y)
// cond:
// result: (ANDW (SRW <t> (MOVBZreg x) y) (SUBEWcarrymask <t> (CMPWUconst (MOVBZreg y) [7])))
y := v.Args[1]
v.reset(OpS390XANDW)
v0 := b.NewValue0(v.Pos, OpS390XSRW, t)
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v1.AddArg(x)
v0.AddArg(v1)
v0.AddArg(y)
v.AddArg(v0)
v2 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, t)
- v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v3 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v3.AuxInt = 7
- v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v4.AddArg(y)
v3.AddArg(v4)
v2.AddArg(v3)
func rewriteValueS390X_OpRsh8x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x16 <t> x y)
// cond:
// result: (SRAW <t> (MOVBreg x) (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVHZreg y) [7])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XORW, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v4.AuxInt = 7
- v5 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh8x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x32 <t> x y)
// cond:
// result: (SRAW <t> (MOVBreg x) (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst y [7])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XORW, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v4.AuxInt = 7
v4.AddArg(y)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 <t> x y)
// cond:
// result: (SRAW <t> (MOVBreg x) (OR <y.Type> y (NOT <y.Type> (SUBEcarrymask <y.Type> (CMPUconst y [7])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XOR, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOT, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v4.AuxInt = 7
v4.AddArg(y)
v3.AddArg(v4)
func rewriteValueS390X_OpRsh8x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x8 <t> x y)
// cond:
// result: (SRAW <t> (MOVBreg x) (ORW <y.Type> y (NOTW <y.Type> (SUBEWcarrymask <y.Type> (CMPWUconst (MOVBZreg y) [7])))))
y := v.Args[1]
v.reset(OpS390XSRAW)
v.Type = t
- v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVBreg, typ.Int64)
v0.AddArg(x)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XORW, y.Type)
v1.AddArg(y)
v2 := b.NewValue0(v.Pos, OpS390XNOTW, y.Type)
v3 := b.NewValue0(v.Pos, OpS390XSUBEWcarrymask, y.Type)
- v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v4.AuxInt = 7
- v5 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.UInt64)
v5.AddArg(y)
v4.AddArg(v5)
v3.AddArg(v4)
break
}
v.reset(OpS390XInvertFlags)
- v0 := b.NewValue0(v.Pos, OpS390XCMPconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpS390XCMPconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
break
}
v.reset(OpS390XInvertFlags)
- v0 := b.NewValue0(v.Pos, OpS390XCMPUconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpS390XCMPUconst, types.TypeFlags)
v0.AuxInt = int64(uint32(c))
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpS390XInvertFlags)
- v0 := b.NewValue0(v.Pos, OpS390XCMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpS390XCMPWconst, types.TypeFlags)
v0.AuxInt = c
v0.AddArg(x)
v.AddArg(v0)
c := v_0.AuxInt
x := v.Args[1]
v.reset(OpS390XInvertFlags)
- v0 := b.NewValue0(v.Pos, OpS390XCMPWUconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpS390XCMPWUconst, types.TypeFlags)
v0.AuxInt = int64(uint32(c))
v0.AddArg(x)
v.AddArg(v0)
b := v.Block
_ = b
// match: (MOVDnop <t> x)
- // cond: t.Compare(x.Type) == CMPeq
+ // cond: t.Compare(x.Type) == types.CMPeq
// result: x
for {
t := v.Type
x := v.Args[0]
- if !(t.Compare(x.Type) == CMPeq) {
+ if !(t.Compare(x.Type) == types.CMPeq) {
break
}
v.reset(OpCopy)
b := v.Block
_ = b
// match: (MOVDreg <t> x)
- // cond: t.Compare(x.Type) == CMPeq
+ // cond: t.Compare(x.Type) == types.CMPeq
// result: x
for {
t := v.Type
x := v.Args[0]
- if !(t.Compare(x.Type) == CMPeq) {
+ if !(t.Compare(x.Type) == types.CMPeq) {
break
}
v.reset(OpCopy)
func rewriteValueS390X_OpS390XMOVHstoreconst_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVHstoreconst [sc] {s} (ADDconst [off] ptr) mem)
// cond: isU12Bit(ValAndOff(sc).Off()+off)
// result: (MOVHstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
v.AuxInt = ValAndOff(a).Off()
v.Aux = s
v.AddArg(p)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = int64(int32(ValAndOff(c).Val()&0xffff | ValAndOff(a).Val()<<16))
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueS390X_OpS390XMOVWstoreconst_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (MOVWstoreconst [sc] {s} (ADDconst [off] ptr) mem)
// cond: isU12Bit(ValAndOff(sc).Off()+off)
// result: (MOVWstoreconst [ValAndOff(sc).add(off)] {s} ptr mem)
v.AuxInt = ValAndOff(a).Off()
v.Aux = s
v.AddArg(p)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = ValAndOff(c).Val()&0xffffffff | ValAndOff(a).Val()<<32
v.AddArg(v0)
v.AddArg(mem)
func rewriteValueS390X_OpS390XNOT_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NOT x)
// cond: true
// result: (XOR (MOVDconst [-1]) x)
break
}
v.reset(OpS390XXOR)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDconst, typ.UInt64)
v0.AuxInt = -1
v.AddArg(v0)
v.AddArg(x)
func rewriteValueS390X_OpS390XOR_10(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR <t> x g:(MOVDload [off] {sym} ptr mem))
// cond: ptr.Op != OpSB && is20Bit(off) && canMergeLoad(v, g, x) && clobber(g)
// result: (ORload <t> [off] {sym} x ptr mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDload, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XOR_20(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR or:(OR y s1:(SLDconst [j1] x1:(MOVBZload [i1] {s} p mem))) s0:(SLDconst [j0] x0:(MOVBZload [i0] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j1] (MOVHZload [i0] {s} p mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueS390X_OpS390XOR_30(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR sh:(SLDconst [8] x0:(MOVBZloadidx [i0] {s} idx p mem)) x1:(MOVBZloadidx [i1] {s} p idx mem))
// cond: i1 == i0+1 && p.Op != OpSB && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVHZloadidx [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueS390X_OpS390XOR_40(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR sh:(SLDconst [16] x0:(MOVHZloadidx [i0] {s} idx p mem)) x1:(MOVHZloadidx [i1] {s} idx p mem))
// cond: i1 == i0+2 && p.Op != OpSB && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWZloadidx [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDloadidx, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XOR_50(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR s0:(SLDconst [j0] x0:(MOVBZloadidx [i0] {s} idx p mem)) or:(OR s1:(SLDconst [j1] x1:(MOVBZloadidx [i1] {s} p idx mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j1] (MOVHZloadidx [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XOR_60(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR or:(OR y s1:(SLDconst [j1] x1:(MOVBZloadidx [i1] {s} idx p mem))) s0:(SLDconst [j0] x0:(MOVBZloadidx [i0] {s} p idx mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j1] (MOVHZloadidx [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XOR_70(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR s0:(SLDconst [j0] x0:(MOVHZloadidx [i0] {s} idx p mem)) or:(OR y s1:(SLDconst [j1] x1:(MOVHZloadidx [i1] {s} p idx mem))))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j1] (MOVWZloadidx [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XOR_80(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR or:(OR y s1:(SLDconst [j1] x1:(MOVHZloadidx [i1] {s} idx p mem))) s0:(SLDconst [j0] x0:(MOVHZloadidx [i0] {s} idx p mem)))
// cond: i1 == i0+2 && j1 == j0-16 && j1 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j1] (MOVWZloadidx [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRload, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRload, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRload, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XOR_90(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR or:(OR y s0:(SLDconst [j0] x0:(MOVBZload [i0] {s} p mem))) s1:(SLDconst [j1] x1:(MOVBZload [i1] {s} p mem)))
// cond: p.Op != OpSB && i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j0] (MOVHZreg (MOVHBRload [i0] {s} p mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueS390X_OpS390XOR_100(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR sh:(SLDconst [8] x1:(MOVBZloadidx [i1] {s} idx p mem)) x0:(MOVBZloadidx [i0] {s} p idx mem))
// cond: p.Op != OpSB && i1 == i0+1 && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVHZreg (MOVHBRloadidx [i0] {s} p idx mem))
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
func rewriteValueS390X_OpS390XOR_110(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR sh:(SLDconst [16] r1:(MOVHZreg x1:(MOVHBRloadidx [i1] {s} idx p mem))) r0:(MOVHZreg x0:(MOVHBRloadidx [i0] {s} idx p mem)))
// cond: i1 == i0+2 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWZreg (MOVWBRloadidx [i0] {s} p idx mem))
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, types.Int64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVDBRloadidx, typ.Int64)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XOR_120(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR s1:(SLDconst [j1] x1:(MOVBZloadidx [i1] {s} idx p mem)) or:(OR s0:(SLDconst [j0] x0:(MOVBZloadidx [i0] {s} p idx mem)) y))
// cond: p.Op != OpSB && i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j0] (MOVHZreg (MOVHBRloadidx [i0] {s} p idx mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XOR_130(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR or:(OR y s0:(SLDconst [j0] x0:(MOVBZloadidx [i0] {s} idx p mem))) s1:(SLDconst [j1] x1:(MOVBZloadidx [i1] {s} p idx mem)))
// cond: p.Op != OpSB && i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j0] (MOVHZreg (MOVHBRloadidx [i0] {s} p idx mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XOR_140(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR s1:(SLDconst [j1] r1:(MOVHZreg x1:(MOVHBRloadidx [i1] {s} idx p mem))) or:(OR y s0:(SLDconst [j0] r0:(MOVHZreg x0:(MOVHBRloadidx [i0] {s} p idx mem)))))
// cond: i1 == i0+2 && j1 == j0+16 && j0 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j0] (MOVWZreg (MOVWBRloadidx [i0] {s} p idx mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XOR_150(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (OR or:(OR y s0:(SLDconst [j0] r0:(MOVHZreg x0:(MOVHBRloadidx [i0] {s} idx p mem)))) s1:(SLDconst [j1] r1:(MOVHZreg x1:(MOVHBRloadidx [i1] {s} idx p mem))))
// cond: i1 == i0+2 && j1 == j0+16 && j0 % 32 == 0 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (OR <v.Type> (SLDconst <v.Type> [j0] (MOVWZreg (MOVWBRloadidx [i0] {s} p idx mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLDconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVWZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XORW_10(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW <t> g:(MOVWZload [off] {sym} ptr mem) x)
// cond: ptr.Op != OpSB && is20Bit(off) && canMergeLoad(v, g, x) && clobber(g)
// result: (ORWload <t> [off] {sym} x ptr mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XORW_20(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW or:(ORW y s1:(SLWconst [j1] x1:(MOVBZload [i1] {s} p mem))) s0:(SLWconst [j0] x0:(MOVBZload [i0] {s} p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORW <v.Type> (SLWconst <v.Type> [j1] (MOVHZload [i0] {s} p mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZload, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueS390X_OpS390XORW_30(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW x1:(MOVHZloadidx [i1] {s} idx p mem) sh:(SLWconst [16] x0:(MOVHZloadidx [i0] {s} p idx mem)))
// cond: i1 == i0+2 && p.Op != OpSB && x0.Uses == 1 && x1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWZloadidx [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWZloadidx, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XORW_40(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW s0:(SLWconst [j0] x0:(MOVBZloadidx [i0] {s} idx p mem)) or:(ORW s1:(SLWconst [j1] x1:(MOVBZloadidx [i1] {s} idx p mem)) y))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORW <v.Type> (SLWconst <v.Type> [j1] (MOVHZloadidx [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
func rewriteValueS390X_OpS390XORW_50(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW or:(ORW s1:(SLWconst [j1] x1:(MOVBZloadidx [i1] {s} idx p mem)) y) s0:(SLWconst [j0] x0:(MOVBZloadidx [i0] {s} idx p mem)))
// cond: i1 == i0+1 && j1 == j0-8 && j1 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORW <v.Type> (SLWconst <v.Type> [j1] (MOVHZloadidx [i0] {s} p idx mem)) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j1
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZloadidx, typ.UInt16)
v2.AuxInt = i0
v2.Aux = s
v2.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRload, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRload, typ.UInt32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XORW_60(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW or:(ORW y s0:(SLWconst [j0] x0:(MOVBZload [i0] {s} p mem))) s1:(SLWconst [j1] x1:(MOVBZload [i1] {s} p mem)))
// cond: p.Op != OpSB && i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORW <v.Type> (SLWconst <v.Type> [j0] (MOVHZreg (MOVHBRload [i0] {s} p mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, types.UInt16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRload, typ.UInt16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
v.reset(OpCopy)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v1.AuxInt = i0
v1.Aux = s
v1.AddArg(p)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
func rewriteValueS390X_OpS390XORW_70(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW r0:(MOVHZreg x0:(MOVHBRloadidx [i0] {s} idx p mem)) sh:(SLWconst [16] r1:(MOVHZreg x1:(MOVHBRloadidx [i1] {s} p idx mem))))
// cond: i1 == i0+2 && x0.Uses == 1 && x1.Uses == 1 && r0.Uses == 1 && r1.Uses == 1 && sh.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(r0) && clobber(r1) && clobber(sh)
// result: @mergePoint(b,x0,x1) (MOVWBRloadidx [i0] {s} p idx mem)
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
break
}
b = mergePoint(b, x0, x1)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWBRloadidx, typ.Int32)
v.reset(OpCopy)
v.AddArg(v0)
v0.AuxInt = i0
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XORW_80(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW s1:(SLWconst [j1] x1:(MOVBZloadidx [i1] {s} idx p mem)) or:(ORW s0:(SLWconst [j0] x0:(MOVBZloadidx [i0] {s} idx p mem)) y))
// cond: p.Op != OpSB && i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORW <v.Type> (SLWconst <v.Type> [j0] (MOVHZreg (MOVHBRloadidx [i0] {s} p idx mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
func rewriteValueS390X_OpS390XORW_90(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ORW or:(ORW s0:(SLWconst [j0] x0:(MOVBZloadidx [i0] {s} idx p mem)) y) s1:(SLWconst [j1] x1:(MOVBZloadidx [i1] {s} idx p mem)))
// cond: p.Op != OpSB && i1 == i0+1 && j1 == j0+8 && j0 % 16 == 0 && x0.Uses == 1 && x1.Uses == 1 && s0.Uses == 1 && s1.Uses == 1 && or.Uses == 1 && mergePoint(b,x0,x1) != nil && clobber(x0) && clobber(x1) && clobber(s0) && clobber(s1) && clobber(or)
// result: @mergePoint(b,x0,x1) (ORW <v.Type> (SLWconst <v.Type> [j0] (MOVHZreg (MOVHBRloadidx [i0] {s} p idx mem))) y)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
v.AddArg(v0)
v1 := b.NewValue0(v.Pos, OpS390XSLWconst, v.Type)
v1.AuxInt = j0
- v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, types.Int16)
+ v2 := b.NewValue0(v.Pos, OpS390XMOVHZreg, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpS390XMOVHBRloadidx, typ.Int16)
v3.AuxInt = i0
v3.Aux = s
v3.AddArg(p)
}
func rewriteValueS390X_OpStore_0(v *Value) bool {
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8 && is64BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)
// result: (FMOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && is64BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 8 && is64BitFloat(val.Type)) {
break
}
v.reset(OpS390XFMOVDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4 && is32BitFloat(val.Type)
+ // cond: t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)
// result: (FMOVSstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4 && is32BitFloat(val.Type)) {
+ if !(t.(*types.Type).Size() == 4 && is32BitFloat(val.Type)) {
break
}
v.reset(OpS390XFMOVSstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 8
+ // cond: t.(*types.Type).Size() == 8
// result: (MOVDstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8) {
+ if !(t.(*types.Type).Size() == 8) {
break
}
v.reset(OpS390XMOVDstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 4
+ // cond: t.(*types.Type).Size() == 4
// result: (MOVWstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 4) {
+ if !(t.(*types.Type).Size() == 4) {
break
}
v.reset(OpS390XMOVWstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 2
+ // cond: t.(*types.Type).Size() == 2
// result: (MOVHstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 2) {
+ if !(t.(*types.Type).Size() == 2) {
break
}
v.reset(OpS390XMOVHstore)
return true
}
// match: (Store {t} ptr val mem)
- // cond: t.(Type).Size() == 1
+ // cond: t.(*types.Type).Size() == 1
// result: (MOVBstore ptr val mem)
for {
t := v.Aux
ptr := v.Args[0]
val := v.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 1) {
+ if !(t.(*types.Type).Size() == 1) {
break
}
v.reset(OpS390XMOVBstore)
v.reset(OpS390XMOVBstoreconst)
v.AuxInt = makeValAndOff(0, 2)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpS390XMOVHstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVHstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpS390XMOVBstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpS390XMOVHstoreconst)
v.AuxInt = makeValAndOff(0, 4)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
v.reset(OpS390XMOVWstoreconst)
v.AuxInt = makeValAndOff(0, 3)
v.AddArg(destptr)
- v0 := b.NewValue0(v.Pos, OpS390XMOVWstoreconst, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpS390XMOVWstoreconst, types.TypeMem)
v0.AuxInt = 0
v0.AddArg(destptr)
v0.AddArg(mem)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockS390XEQ:
// match: (EQ (InvertFlags cmp) yes no)
}
// match: (If cond yes no)
// cond:
- // result: (NE (CMPWconst [0] (MOVBZreg <types.Bool> cond)) yes no)
+ // result: (NE (CMPWconst [0] (MOVBZreg <typ.Bool> cond)) yes no)
for {
v := b.Control
_ = v
cond := b.Control
b.Kind = BlockS390XNE
- v0 := b.NewValue0(v.Pos, OpS390XCMPWconst, TypeFlags)
+ v0 := b.NewValue0(v.Pos, OpS390XCMPWconst, types.TypeFlags)
v0.AuxInt = 0
- v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, types.Bool)
+ v1 := b.NewValue0(v.Pos, OpS390XMOVBZreg, typ.Bool)
v1.AddArg(cond)
v0.AddArg(v1)
b.SetControl(v0)
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValuedec(v *Value) bool {
switch v.Op {
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Load <t> ptr mem)
// cond: t.IsComplex() && t.Size() == 8
- // result: (ComplexMake (Load <types.Float32> ptr mem) (Load <types.Float32> (OffPtr <types.Float32Ptr> [4] ptr) mem) )
+ // result: (ComplexMake (Load <typ.Float32> ptr mem) (Load <typ.Float32> (OffPtr <typ.Float32Ptr> [4] ptr) mem) )
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpComplexMake)
- v0 := b.NewValue0(v.Pos, OpLoad, types.Float32)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.Float32)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.Float32)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.Float32Ptr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.Float32)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.Float32Ptr)
v2.AuxInt = 4
v2.AddArg(ptr)
v1.AddArg(v2)
}
// match: (Load <t> ptr mem)
// cond: t.IsComplex() && t.Size() == 16
- // result: (ComplexMake (Load <types.Float64> ptr mem) (Load <types.Float64> (OffPtr <types.Float64Ptr> [8] ptr) mem) )
+ // result: (ComplexMake (Load <typ.Float64> ptr mem) (Load <typ.Float64> (OffPtr <typ.Float64Ptr> [8] ptr) mem) )
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpComplexMake)
- v0 := b.NewValue0(v.Pos, OpLoad, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.Float64)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.Float64)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.Float64Ptr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.Float64)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.Float64Ptr)
v2.AuxInt = 8
v2.AddArg(ptr)
v1.AddArg(v2)
}
// match: (Load <t> ptr mem)
// cond: t.IsString()
- // result: (StringMake (Load <types.BytePtr> ptr mem) (Load <types.Int> (OffPtr <types.IntPtr> [config.PtrSize] ptr) mem))
+ // result: (StringMake (Load <typ.BytePtr> ptr mem) (Load <typ.Int> (OffPtr <typ.IntPtr> [config.PtrSize] ptr) mem))
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpStringMake)
- v0 := b.NewValue0(v.Pos, OpLoad, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.BytePtr)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.Int)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.IntPtr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.Int)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.IntPtr)
v2.AuxInt = config.PtrSize
v2.AddArg(ptr)
v1.AddArg(v2)
}
// match: (Load <t> ptr mem)
// cond: t.IsSlice()
- // result: (SliceMake (Load <t.ElemType().PtrTo()> ptr mem) (Load <types.Int> (OffPtr <types.IntPtr> [config.PtrSize] ptr) mem) (Load <types.Int> (OffPtr <types.IntPtr> [2*config.PtrSize] ptr) mem))
+ // result: (SliceMake (Load <t.ElemType().PtrTo()> ptr mem) (Load <typ.Int> (OffPtr <typ.IntPtr> [config.PtrSize] ptr) mem) (Load <typ.Int> (OffPtr <typ.IntPtr> [2*config.PtrSize] ptr) mem))
for {
t := v.Type
ptr := v.Args[0]
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.Int)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.IntPtr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.Int)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.IntPtr)
v2.AuxInt = config.PtrSize
v2.AddArg(ptr)
v1.AddArg(v2)
v1.AddArg(mem)
v.AddArg(v1)
- v3 := b.NewValue0(v.Pos, OpLoad, types.Int)
- v4 := b.NewValue0(v.Pos, OpOffPtr, types.IntPtr)
+ v3 := b.NewValue0(v.Pos, OpLoad, typ.Int)
+ v4 := b.NewValue0(v.Pos, OpOffPtr, typ.IntPtr)
v4.AuxInt = 2 * config.PtrSize
v4.AddArg(ptr)
v3.AddArg(v4)
}
// match: (Load <t> ptr mem)
// cond: t.IsInterface()
- // result: (IMake (Load <types.BytePtr> ptr mem) (Load <types.BytePtr> (OffPtr <types.BytePtrPtr> [config.PtrSize] ptr) mem))
+ // result: (IMake (Load <typ.BytePtr> ptr mem) (Load <typ.BytePtr> (OffPtr <typ.BytePtrPtr> [config.PtrSize] ptr) mem))
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpIMake)
- v0 := b.NewValue0(v.Pos, OpLoad, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.BytePtr)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.BytePtr)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.BytePtrPtr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.BytePtr)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.BytePtrPtr)
v2.AuxInt = config.PtrSize
v2.AddArg(ptr)
v1.AddArg(v2)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Store {t} dst (ComplexMake real imag) mem)
- // cond: t.(Type).Size() == 8
- // result: (Store {types.Float32} (OffPtr <types.Float32Ptr> [4] dst) imag (Store {types.Float32} dst real mem))
+ // cond: t.(*types.Type).Size() == 8
+ // result: (Store {typ.Float32} (OffPtr <typ.Float32Ptr> [4] dst) imag (Store {typ.Float32} dst real mem))
for {
t := v.Aux
dst := v.Args[0]
real := v_1.Args[0]
imag := v_1.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8) {
+ if !(t.(*types.Type).Size() == 8) {
break
}
v.reset(OpStore)
- v.Aux = types.Float32
- v0 := b.NewValue0(v.Pos, OpOffPtr, types.Float32Ptr)
+ v.Aux = typ.Float32
+ v0 := b.NewValue0(v.Pos, OpOffPtr, typ.Float32Ptr)
v0.AuxInt = 4
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(imag)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
- v1.Aux = types.Float32
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
+ v1.Aux = typ.Float32
v1.AddArg(dst)
v1.AddArg(real)
v1.AddArg(mem)
return true
}
// match: (Store {t} dst (ComplexMake real imag) mem)
- // cond: t.(Type).Size() == 16
- // result: (Store {types.Float64} (OffPtr <types.Float64Ptr> [8] dst) imag (Store {types.Float64} dst real mem))
+ // cond: t.(*types.Type).Size() == 16
+ // result: (Store {typ.Float64} (OffPtr <typ.Float64Ptr> [8] dst) imag (Store {typ.Float64} dst real mem))
for {
t := v.Aux
dst := v.Args[0]
real := v_1.Args[0]
imag := v_1.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 16) {
+ if !(t.(*types.Type).Size() == 16) {
break
}
v.reset(OpStore)
- v.Aux = types.Float64
- v0 := b.NewValue0(v.Pos, OpOffPtr, types.Float64Ptr)
+ v.Aux = typ.Float64
+ v0 := b.NewValue0(v.Pos, OpOffPtr, typ.Float64Ptr)
v0.AuxInt = 8
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(imag)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
- v1.Aux = types.Float64
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
+ v1.Aux = typ.Float64
v1.AddArg(dst)
v1.AddArg(real)
v1.AddArg(mem)
}
// match: (Store dst (StringMake ptr len) mem)
// cond:
- // result: (Store {types.Int} (OffPtr <types.IntPtr> [config.PtrSize] dst) len (Store {types.BytePtr} dst ptr mem))
+ // result: (Store {typ.Int} (OffPtr <typ.IntPtr> [config.PtrSize] dst) len (Store {typ.BytePtr} dst ptr mem))
for {
dst := v.Args[0]
v_1 := v.Args[1]
len := v_1.Args[1]
mem := v.Args[2]
v.reset(OpStore)
- v.Aux = types.Int
- v0 := b.NewValue0(v.Pos, OpOffPtr, types.IntPtr)
+ v.Aux = typ.Int
+ v0 := b.NewValue0(v.Pos, OpOffPtr, typ.IntPtr)
v0.AuxInt = config.PtrSize
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(len)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
- v1.Aux = types.BytePtr
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
+ v1.Aux = typ.BytePtr
v1.AddArg(dst)
v1.AddArg(ptr)
v1.AddArg(mem)
}
// match: (Store dst (SliceMake ptr len cap) mem)
// cond:
- // result: (Store {types.Int} (OffPtr <types.IntPtr> [2*config.PtrSize] dst) cap (Store {types.Int} (OffPtr <types.IntPtr> [config.PtrSize] dst) len (Store {types.BytePtr} dst ptr mem)))
+ // result: (Store {typ.Int} (OffPtr <typ.IntPtr> [2*config.PtrSize] dst) cap (Store {typ.Int} (OffPtr <typ.IntPtr> [config.PtrSize] dst) len (Store {typ.BytePtr} dst ptr mem)))
for {
dst := v.Args[0]
v_1 := v.Args[1]
cap := v_1.Args[2]
mem := v.Args[2]
v.reset(OpStore)
- v.Aux = types.Int
- v0 := b.NewValue0(v.Pos, OpOffPtr, types.IntPtr)
+ v.Aux = typ.Int
+ v0 := b.NewValue0(v.Pos, OpOffPtr, typ.IntPtr)
v0.AuxInt = 2 * config.PtrSize
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(cap)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
- v1.Aux = types.Int
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.IntPtr)
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
+ v1.Aux = typ.Int
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.IntPtr)
v2.AuxInt = config.PtrSize
v2.AddArg(dst)
v1.AddArg(v2)
v1.AddArg(len)
- v3 := b.NewValue0(v.Pos, OpStore, TypeMem)
- v3.Aux = types.BytePtr
+ v3 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
+ v3.Aux = typ.BytePtr
v3.AddArg(dst)
v3.AddArg(ptr)
v3.AddArg(mem)
}
// match: (Store dst (IMake itab data) mem)
// cond:
- // result: (Store {types.BytePtr} (OffPtr <types.BytePtrPtr> [config.PtrSize] dst) data (Store {types.Uintptr} dst itab mem))
+ // result: (Store {typ.BytePtr} (OffPtr <typ.BytePtrPtr> [config.PtrSize] dst) data (Store {typ.Uintptr} dst itab mem))
for {
dst := v.Args[0]
v_1 := v.Args[1]
data := v_1.Args[1]
mem := v.Args[2]
v.reset(OpStore)
- v.Aux = types.BytePtr
- v0 := b.NewValue0(v.Pos, OpOffPtr, types.BytePtrPtr)
+ v.Aux = typ.BytePtr
+ v0 := b.NewValue0(v.Pos, OpOffPtr, typ.BytePtrPtr)
v0.AuxInt = config.PtrSize
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(data)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
- v1.Aux = types.Uintptr
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
+ v1.Aux = typ.Uintptr
v1.AddArg(dst)
v1.AddArg(itab)
v1.AddArg(mem)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
}
return false
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValuedec64(v *Value) bool {
switch v.Op {
func rewriteValuedec64_OpAdd64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Add64 x y)
// cond:
- // result: (Int64Make (Add32withcarry <types.Int32> (Int64Hi x) (Int64Hi y) (Select1 <TypeFlags> (Add32carry (Int64Lo x) (Int64Lo y)))) (Select0 <types.UInt32> (Add32carry (Int64Lo x) (Int64Lo y))))
+ // result: (Int64Make (Add32withcarry <typ.Int32> (Int64Hi x) (Int64Hi y) (Select1 <types.TypeFlags> (Add32carry (Int64Lo x) (Int64Lo y)))) (Select0 <typ.UInt32> (Add32carry (Int64Lo x) (Int64Lo y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpAdd32withcarry, types.Int32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAdd32withcarry, typ.Int32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSelect1, TypeFlags)
- v4 := b.NewValue0(v.Pos, OpAdd32carry, MakeTuple(types.UInt32, TypeFlags))
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpAdd32carry, types.NewTuple(typ.UInt32, types.TypeFlags))
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
v0.AddArg(v3)
v.AddArg(v0)
- v7 := b.NewValue0(v.Pos, OpSelect0, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpAdd32carry, MakeTuple(types.UInt32, TypeFlags))
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpSelect0, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpAdd32carry, types.NewTuple(typ.UInt32, types.TypeFlags))
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(x)
v8.AddArg(v9)
- v10 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v10 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v10.AddArg(y)
v8.AddArg(v10)
v7.AddArg(v8)
func rewriteValuedec64_OpAnd64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (And64 x y)
// cond:
- // result: (Int64Make (And32 <types.UInt32> (Int64Hi x) (Int64Hi y)) (And32 <types.UInt32> (Int64Lo x) (Int64Lo y)))
+ // result: (Int64Make (And32 <typ.UInt32> (Int64Hi x) (Int64Hi y)) (And32 <typ.UInt32> (Int64Lo x) (Int64Lo y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpAnd32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAnd32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAnd32, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAnd32, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(y)
v3.AddArg(v5)
v.AddArg(v3)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Arg {n} [off])
// cond: is64BitInt(v.Type) && !config.BigEndian && v.Type.IsSigned()
- // result: (Int64Make (Arg <types.Int32> {n} [off+4]) (Arg <types.UInt32> {n} [off]))
+ // result: (Int64Make (Arg <typ.Int32> {n} [off+4]) (Arg <typ.UInt32> {n} [off]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpArg, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.Int32)
v0.AuxInt = off + 4
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.UInt32)
v1.AuxInt = off
v1.Aux = n
v.AddArg(v1)
}
// match: (Arg {n} [off])
// cond: is64BitInt(v.Type) && !config.BigEndian && !v.Type.IsSigned()
- // result: (Int64Make (Arg <types.UInt32> {n} [off+4]) (Arg <types.UInt32> {n} [off]))
+ // result: (Int64Make (Arg <typ.UInt32> {n} [off+4]) (Arg <typ.UInt32> {n} [off]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpArg, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.UInt32)
v0.AuxInt = off + 4
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.UInt32)
v1.AuxInt = off
v1.Aux = n
v.AddArg(v1)
}
// match: (Arg {n} [off])
// cond: is64BitInt(v.Type) && config.BigEndian && v.Type.IsSigned()
- // result: (Int64Make (Arg <types.Int32> {n} [off]) (Arg <types.UInt32> {n} [off+4]))
+ // result: (Int64Make (Arg <typ.Int32> {n} [off]) (Arg <typ.UInt32> {n} [off+4]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpArg, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.Int32)
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.UInt32)
v1.AuxInt = off + 4
v1.Aux = n
v.AddArg(v1)
}
// match: (Arg {n} [off])
// cond: is64BitInt(v.Type) && config.BigEndian && !v.Type.IsSigned()
- // result: (Int64Make (Arg <types.UInt32> {n} [off]) (Arg <types.UInt32> {n} [off+4]))
+ // result: (Int64Make (Arg <typ.UInt32> {n} [off]) (Arg <typ.UInt32> {n} [off+4]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpArg, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.UInt32)
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.UInt32)
v1.AuxInt = off + 4
v1.Aux = n
v.AddArg(v1)
func rewriteValuedec64_OpBitLen64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (BitLen64 x)
// cond:
- // result: (Add32 <types.Int> (BitLen32 <types.Int> (Int64Hi x)) (BitLen32 <types.Int> (Or32 <types.UInt32> (Int64Lo x) (Zeromask (Int64Hi x)))))
+ // result: (Add32 <typ.Int> (BitLen32 <typ.Int> (Int64Hi x)) (BitLen32 <typ.Int> (Or32 <typ.UInt32> (Int64Lo x) (Zeromask (Int64Hi x)))))
for {
x := v.Args[0]
v.reset(OpAdd32)
- v.Type = types.Int
- v0 := b.NewValue0(v.Pos, OpBitLen32, types.Int)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v.Type = typ.Int
+ v0 := b.NewValue0(v.Pos, OpBitLen32, typ.Int)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpBitLen32, types.Int)
- v3 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpBitLen32, typ.Int)
+ v3 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(x)
v5.AddArg(v6)
v3.AddArg(v5)
func rewriteValuedec64_OpBswap64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Bswap64 x)
// cond:
- // result: (Int64Make (Bswap32 <types.UInt32> (Int64Lo x)) (Bswap32 <types.UInt32> (Int64Hi x)))
+ // result: (Int64Make (Bswap32 <typ.UInt32> (Int64Lo x)) (Bswap32 <typ.UInt32> (Int64Hi x)))
for {
x := v.Args[0]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpBswap32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpBswap32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpBswap32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpBswap32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v3.AddArg(x)
v2.AddArg(v3)
v.AddArg(v2)
func rewriteValuedec64_OpCom64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Com64 x)
// cond:
- // result: (Int64Make (Com32 <types.UInt32> (Int64Hi x)) (Com32 <types.UInt32> (Int64Lo x)))
+ // result: (Int64Make (Com32 <typ.UInt32> (Int64Hi x)) (Com32 <typ.UInt32> (Int64Lo x)))
for {
x := v.Args[0]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpCom32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpCom32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpCom32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpCom32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v3.AddArg(x)
v2.AddArg(v3)
v.AddArg(v2)
func rewriteValuedec64_OpConst64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Const64 <t> [c])
// cond: t.IsSigned()
- // result: (Int64Make (Const32 <types.Int32> [c>>32]) (Const32 <types.UInt32> [int64(int32(c))]))
+ // result: (Int64Make (Const32 <typ.Int32> [c>>32]) (Const32 <typ.UInt32> [int64(int32(c))]))
for {
t := v.Type
c := v.AuxInt
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpConst32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpConst32, typ.Int32)
v0.AuxInt = c >> 32
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v1.AuxInt = int64(int32(c))
v.AddArg(v1)
return true
}
// match: (Const64 <t> [c])
// cond: !t.IsSigned()
- // result: (Int64Make (Const32 <types.UInt32> [c>>32]) (Const32 <types.UInt32> [int64(int32(c))]))
+ // result: (Int64Make (Const32 <typ.UInt32> [c>>32]) (Const32 <typ.UInt32> [int64(int32(c))]))
for {
t := v.Type
c := v.AuxInt
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v0.AuxInt = c >> 32
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v1.AuxInt = int64(int32(c))
v.AddArg(v1)
return true
func rewriteValuedec64_OpCtz64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Ctz64 x)
// cond:
- // result: (Add32 <types.UInt32> (Ctz32 <types.UInt32> (Int64Lo x)) (And32 <types.UInt32> (Com32 <types.UInt32> (Zeromask (Int64Lo x))) (Ctz32 <types.UInt32> (Int64Hi x))))
+ // result: (Add32 <typ.UInt32> (Ctz32 <typ.UInt32> (Int64Lo x)) (And32 <typ.UInt32> (Com32 <typ.UInt32> (Zeromask (Int64Lo x))) (Ctz32 <typ.UInt32> (Int64Hi x))))
for {
x := v.Args[0]
v.reset(OpAdd32)
- v.Type = types.UInt32
- v0 := b.NewValue0(v.Pos, OpCtz32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v.Type = typ.UInt32
+ v0 := b.NewValue0(v.Pos, OpCtz32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpAnd32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpCom32, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpAnd32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpCom32, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
v3.AddArg(v4)
v2.AddArg(v3)
- v6 := b.NewValue0(v.Pos, OpCtz32, types.UInt32)
- v7 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpCtz32, typ.UInt32)
+ v7 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v7.AddArg(x)
v6.AddArg(v7)
v2.AddArg(v6)
func rewriteValuedec64_OpEq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Eq64 x y)
// cond:
// result: (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Eq32 (Int64Lo x) (Int64Lo y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpAndB)
- v0 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v4 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(y)
v3.AddArg(v5)
v.AddArg(v3)
func rewriteValuedec64_OpGeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64 x y)
// cond:
// result: (OrB (Greater32 (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Geq32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpGreater32, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpGreater32, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpGeq32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpGeq32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpGeq64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Geq64U x y)
// cond:
// result: (OrB (Greater32U (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Geq32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpGreater32U, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpGreater32U, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpGeq32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpGeq32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpGreater64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater64 x y)
// cond:
// result: (OrB (Greater32 (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Greater32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpGreater32, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpGreater32, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpGreater32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpGreater32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpGreater64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Greater64U x y)
// cond:
// result: (OrB (Greater32U (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Greater32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpGreater32U, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpGreater32U, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpGreater32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpGreater32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpLeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64 x y)
// cond:
// result: (OrB (Less32 (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Leq32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpLess32, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpLess32, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpLeq32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpLeq32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpLeq64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Leq64U x y)
// cond:
// result: (OrB (Less32U (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Leq32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpLess32U, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpLess32U, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpLeq32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpLeq32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpLess64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less64 x y)
// cond:
// result: (OrB (Less32 (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Less32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpLess32, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpLess32, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpLess32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpLess32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
func rewriteValuedec64_OpLess64U_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Less64U x y)
// cond:
// result: (OrB (Less32U (Int64Hi x) (Int64Hi y)) (AndB (Eq32 (Int64Hi x) (Int64Hi y)) (Less32U (Int64Lo x) (Int64Lo y))))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpLess32U, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpLess32U, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpAndB, types.Bool)
- v4 := b.NewValue0(v.Pos, OpEq32, types.Bool)
- v5 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpAndB, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpEq32, typ.Bool)
+ v5 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
- v7 := b.NewValue0(v.Pos, OpLess32U, types.Bool)
- v8 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpLess32U, typ.Bool)
+ v8 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v8.AddArg(x)
v7.AddArg(v8)
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(y)
v7.AddArg(v9)
v3.AddArg(v7)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Load <t> ptr mem)
// cond: is64BitInt(t) && !config.BigEndian && t.IsSigned()
- // result: (Int64Make (Load <types.Int32> (OffPtr <types.Int32Ptr> [4] ptr) mem) (Load <types.UInt32> ptr mem))
+ // result: (Int64Make (Load <typ.Int32> (OffPtr <typ.Int32Ptr> [4] ptr) mem) (Load <typ.UInt32> ptr mem))
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpLoad, types.Int32)
- v1 := b.NewValue0(v.Pos, OpOffPtr, types.Int32Ptr)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.Int32)
+ v1 := b.NewValue0(v.Pos, OpOffPtr, typ.Int32Ptr)
v1.AuxInt = 4
v1.AddArg(ptr)
v0.AddArg(v1)
v0.AddArg(mem)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpLoad, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpLoad, typ.UInt32)
v2.AddArg(ptr)
v2.AddArg(mem)
v.AddArg(v2)
}
// match: (Load <t> ptr mem)
// cond: is64BitInt(t) && !config.BigEndian && !t.IsSigned()
- // result: (Int64Make (Load <types.UInt32> (OffPtr <types.UInt32Ptr> [4] ptr) mem) (Load <types.UInt32> ptr mem))
+ // result: (Int64Make (Load <typ.UInt32> (OffPtr <typ.UInt32Ptr> [4] ptr) mem) (Load <typ.UInt32> ptr mem))
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpLoad, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpOffPtr, types.UInt32Ptr)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOffPtr, typ.UInt32Ptr)
v1.AuxInt = 4
v1.AddArg(ptr)
v0.AddArg(v1)
v0.AddArg(mem)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpLoad, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpLoad, typ.UInt32)
v2.AddArg(ptr)
v2.AddArg(mem)
v.AddArg(v2)
}
// match: (Load <t> ptr mem)
// cond: is64BitInt(t) && config.BigEndian && t.IsSigned()
- // result: (Int64Make (Load <types.Int32> ptr mem) (Load <types.UInt32> (OffPtr <types.UInt32Ptr> [4] ptr) mem))
+ // result: (Int64Make (Load <typ.Int32> ptr mem) (Load <typ.UInt32> (OffPtr <typ.UInt32Ptr> [4] ptr) mem))
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpLoad, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.Int32)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.UInt32Ptr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.UInt32Ptr)
v2.AuxInt = 4
v2.AddArg(ptr)
v1.AddArg(v2)
}
// match: (Load <t> ptr mem)
// cond: is64BitInt(t) && config.BigEndian && !t.IsSigned()
- // result: (Int64Make (Load <types.UInt32> ptr mem) (Load <types.UInt32> (OffPtr <types.UInt32Ptr> [4] ptr) mem))
+ // result: (Int64Make (Load <typ.UInt32> ptr mem) (Load <typ.UInt32> (OffPtr <typ.UInt32Ptr> [4] ptr) mem))
for {
t := v.Type
ptr := v.Args[0]
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpLoad, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpLoad, typ.UInt32)
v0.AddArg(ptr)
v0.AddArg(mem)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpLoad, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOffPtr, types.UInt32Ptr)
+ v1 := b.NewValue0(v.Pos, OpLoad, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOffPtr, typ.UInt32Ptr)
v2.AuxInt = 4
v2.AddArg(ptr)
v1.AddArg(v2)
func rewriteValuedec64_OpLsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const32 [0])
}
// match: (Lsh16x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Lsh16x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Lsh16x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpLsh16x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpLsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const32 [0])
}
// match: (Lsh32x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Lsh32x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Lsh32x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpLsh32x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpLsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x16 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Or32 <types.UInt32> (Or32 <types.UInt32> (Lsh32x16 <types.UInt32> hi s) (Rsh32Ux16 <types.UInt32> lo (Sub16 <types.UInt16> (Const16 <types.UInt16> [32]) s))) (Lsh32x16 <types.UInt32> lo (Sub16 <types.UInt16> s (Const16 <types.UInt16> [32])))) (Lsh32x16 <types.UInt32> lo s))
+ // result: (Int64Make (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Lsh32x16 <typ.UInt32> hi s) (Rsh32Ux16 <typ.UInt32> lo (Sub16 <typ.UInt16> (Const16 <typ.UInt16> [32]) s))) (Lsh32x16 <typ.UInt32> lo (Sub16 <typ.UInt16> s (Const16 <typ.UInt16> [32])))) (Lsh32x16 <typ.UInt32> lo s))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpLsh32x16, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpLsh32x16, typ.UInt32)
v2.AddArg(hi)
v2.AddArg(s)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux16, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux16, typ.UInt32)
v3.AddArg(lo)
- v4 := b.NewValue0(v.Pos, OpSub16, types.UInt16)
- v5 := b.NewValue0(v.Pos, OpConst16, types.UInt16)
+ v4 := b.NewValue0(v.Pos, OpSub16, typ.UInt16)
+ v5 := b.NewValue0(v.Pos, OpConst16, typ.UInt16)
v5.AuxInt = 32
v4.AddArg(v5)
v4.AddArg(s)
v3.AddArg(v4)
v1.AddArg(v3)
v0.AddArg(v1)
- v6 := b.NewValue0(v.Pos, OpLsh32x16, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpLsh32x16, typ.UInt32)
v6.AddArg(lo)
- v7 := b.NewValue0(v.Pos, OpSub16, types.UInt16)
+ v7 := b.NewValue0(v.Pos, OpSub16, typ.UInt16)
v7.AddArg(s)
- v8 := b.NewValue0(v.Pos, OpConst16, types.UInt16)
+ v8 := b.NewValue0(v.Pos, OpConst16, typ.UInt16)
v8.AuxInt = 32
v7.AddArg(v8)
v6.AddArg(v7)
v0.AddArg(v6)
v.AddArg(v0)
- v9 := b.NewValue0(v.Pos, OpLsh32x16, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpLsh32x16, typ.UInt32)
v9.AddArg(lo)
v9.AddArg(s)
v.AddArg(v9)
func rewriteValuedec64_OpLsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x32 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Or32 <types.UInt32> (Or32 <types.UInt32> (Lsh32x32 <types.UInt32> hi s) (Rsh32Ux32 <types.UInt32> lo (Sub32 <types.UInt32> (Const32 <types.UInt32> [32]) s))) (Lsh32x32 <types.UInt32> lo (Sub32 <types.UInt32> s (Const32 <types.UInt32> [32])))) (Lsh32x32 <types.UInt32> lo s))
+ // result: (Int64Make (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Lsh32x32 <typ.UInt32> hi s) (Rsh32Ux32 <typ.UInt32> lo (Sub32 <typ.UInt32> (Const32 <typ.UInt32> [32]) s))) (Lsh32x32 <typ.UInt32> lo (Sub32 <typ.UInt32> s (Const32 <typ.UInt32> [32])))) (Lsh32x32 <typ.UInt32> lo s))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpLsh32x32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpLsh32x32, typ.UInt32)
v2.AddArg(hi)
v2.AddArg(s)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux32, typ.UInt32)
v3.AddArg(lo)
- v4 := b.NewValue0(v.Pos, OpSub32, types.UInt32)
- v5 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpSub32, typ.UInt32)
+ v5 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v5.AuxInt = 32
v4.AddArg(v5)
v4.AddArg(s)
v3.AddArg(v4)
v1.AddArg(v3)
v0.AddArg(v1)
- v6 := b.NewValue0(v.Pos, OpLsh32x32, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpLsh32x32, typ.UInt32)
v6.AddArg(lo)
- v7 := b.NewValue0(v.Pos, OpSub32, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpSub32, typ.UInt32)
v7.AddArg(s)
- v8 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v8 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v8.AuxInt = 32
v7.AddArg(v8)
v6.AddArg(v7)
v0.AddArg(v6)
v.AddArg(v0)
- v9 := b.NewValue0(v.Pos, OpLsh32x32, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpLsh32x32, typ.UInt32)
v9.AddArg(lo)
v9.AddArg(s)
v.AddArg(v9)
func rewriteValuedec64_OpLsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const64 [0])
}
// match: (Lsh64x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Lsh64x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Lsh64x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpLsh64x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpLsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x8 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Or32 <types.UInt32> (Or32 <types.UInt32> (Lsh32x8 <types.UInt32> hi s) (Rsh32Ux8 <types.UInt32> lo (Sub8 <types.UInt8> (Const8 <types.UInt8> [32]) s))) (Lsh32x8 <types.UInt32> lo (Sub8 <types.UInt8> s (Const8 <types.UInt8> [32])))) (Lsh32x8 <types.UInt32> lo s))
+ // result: (Int64Make (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Lsh32x8 <typ.UInt32> hi s) (Rsh32Ux8 <typ.UInt32> lo (Sub8 <typ.UInt8> (Const8 <typ.UInt8> [32]) s))) (Lsh32x8 <typ.UInt32> lo (Sub8 <typ.UInt8> s (Const8 <typ.UInt8> [32])))) (Lsh32x8 <typ.UInt32> lo s))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpLsh32x8, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpLsh32x8, typ.UInt32)
v2.AddArg(hi)
v2.AddArg(s)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux8, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux8, typ.UInt32)
v3.AddArg(lo)
- v4 := b.NewValue0(v.Pos, OpSub8, types.UInt8)
- v5 := b.NewValue0(v.Pos, OpConst8, types.UInt8)
+ v4 := b.NewValue0(v.Pos, OpSub8, typ.UInt8)
+ v5 := b.NewValue0(v.Pos, OpConst8, typ.UInt8)
v5.AuxInt = 32
v4.AddArg(v5)
v4.AddArg(s)
v3.AddArg(v4)
v1.AddArg(v3)
v0.AddArg(v1)
- v6 := b.NewValue0(v.Pos, OpLsh32x8, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpLsh32x8, typ.UInt32)
v6.AddArg(lo)
- v7 := b.NewValue0(v.Pos, OpSub8, types.UInt8)
+ v7 := b.NewValue0(v.Pos, OpSub8, typ.UInt8)
v7.AddArg(s)
- v8 := b.NewValue0(v.Pos, OpConst8, types.UInt8)
+ v8 := b.NewValue0(v.Pos, OpConst8, typ.UInt8)
v8.AuxInt = 32
v7.AddArg(v8)
v6.AddArg(v7)
v0.AddArg(v6)
v.AddArg(v0)
- v9 := b.NewValue0(v.Pos, OpLsh32x8, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpLsh32x8, typ.UInt32)
v9.AddArg(lo)
v9.AddArg(s)
v.AddArg(v9)
func rewriteValuedec64_OpLsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const32 [0])
}
// match: (Lsh8x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Lsh8x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Lsh8x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpLsh8x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpMul64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul64 x y)
// cond:
- // result: (Int64Make (Add32 <types.UInt32> (Mul32 <types.UInt32> (Int64Lo x) (Int64Hi y)) (Add32 <types.UInt32> (Mul32 <types.UInt32> (Int64Hi x) (Int64Lo y)) (Select0 <types.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y))))) (Select1 <types.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y))))
+ // result: (Int64Make (Add32 <typ.UInt32> (Mul32 <typ.UInt32> (Int64Lo x) (Int64Hi y)) (Add32 <typ.UInt32> (Mul32 <typ.UInt32> (Int64Hi x) (Int64Lo y)) (Select0 <typ.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y))))) (Select1 <typ.UInt32> (Mul32uhilo (Int64Lo x) (Int64Lo y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpAdd32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpAdd32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v2.AddArg(x)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v3.AddArg(y)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpAdd32, types.UInt32)
- v5 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v6 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpAdd32, typ.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v6.AddArg(x)
v5.AddArg(v6)
- v7 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v7.AddArg(y)
v5.AddArg(v7)
v4.AddArg(v5)
- v8 := b.NewValue0(v.Pos, OpSelect0, types.UInt32)
- v9 := b.NewValue0(v.Pos, OpMul32uhilo, MakeTuple(types.UInt32, types.UInt32))
- v10 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v8 := b.NewValue0(v.Pos, OpSelect0, typ.UInt32)
+ v9 := b.NewValue0(v.Pos, OpMul32uhilo, types.NewTuple(typ.UInt32, typ.UInt32))
+ v10 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v10.AddArg(x)
v9.AddArg(v10)
- v11 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v11 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v11.AddArg(y)
v9.AddArg(v11)
v8.AddArg(v9)
v4.AddArg(v8)
v0.AddArg(v4)
v.AddArg(v0)
- v12 := b.NewValue0(v.Pos, OpSelect1, types.UInt32)
- v13 := b.NewValue0(v.Pos, OpMul32uhilo, MakeTuple(types.UInt32, types.UInt32))
- v14 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v12 := b.NewValue0(v.Pos, OpSelect1, typ.UInt32)
+ v13 := b.NewValue0(v.Pos, OpMul32uhilo, types.NewTuple(typ.UInt32, typ.UInt32))
+ v14 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v14.AddArg(x)
v13.AddArg(v14)
- v15 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v15 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v15.AddArg(y)
v13.AddArg(v15)
v12.AddArg(v13)
func rewriteValuedec64_OpNeq64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Neq64 x y)
// cond:
// result: (OrB (Neq32 (Int64Hi x) (Int64Hi y)) (Neq32 (Int64Lo x) (Int64Lo y)))
x := v.Args[0]
y := v.Args[1]
v.reset(OpOrB)
- v0 := b.NewValue0(v.Pos, OpNeq32, types.Bool)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpNeq32, typ.Bool)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpNeq32, types.Bool)
- v4 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpNeq32, typ.Bool)
+ v4 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(y)
v3.AddArg(v5)
v.AddArg(v3)
func rewriteValuedec64_OpOr64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Or64 x y)
// cond:
- // result: (Int64Make (Or32 <types.UInt32> (Int64Hi x) (Int64Hi y)) (Or32 <types.UInt32> (Int64Lo x) (Int64Lo y)))
+ // result: (Int64Make (Or32 <typ.UInt32> (Int64Hi x) (Int64Hi y)) (Or32 <typ.UInt32> (Int64Lo x) (Int64Lo y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(y)
v3.AddArg(v5)
v.AddArg(v3)
func rewriteValuedec64_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const32 [0])
}
// match: (Rsh16Ux64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh16Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh16Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh16Ux32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 x (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Signmask (SignExt16to32 x))
break
}
v.reset(OpSignmask)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh16x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh16x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh16x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh16x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh32Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const32 [0])
}
// match: (Rsh32Ux64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh32Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh32Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh32Ux32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x64 x (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Signmask x)
}
// match: (Rsh32x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh32x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh32x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh32x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh64Ux16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux16 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Rsh32Ux16 <types.UInt32> hi s) (Or32 <types.UInt32> (Or32 <types.UInt32> (Rsh32Ux16 <types.UInt32> lo s) (Lsh32x16 <types.UInt32> hi (Sub16 <types.UInt16> (Const16 <types.UInt16> [32]) s))) (Rsh32Ux16 <types.UInt32> hi (Sub16 <types.UInt16> s (Const16 <types.UInt16> [32])))))
+ // result: (Int64Make (Rsh32Ux16 <typ.UInt32> hi s) (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Rsh32Ux16 <typ.UInt32> lo s) (Lsh32x16 <typ.UInt32> hi (Sub16 <typ.UInt16> (Const16 <typ.UInt16> [32]) s))) (Rsh32Ux16 <typ.UInt32> hi (Sub16 <typ.UInt16> s (Const16 <typ.UInt16> [32])))))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux16, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux16, typ.UInt32)
v0.AddArg(hi)
v0.AddArg(s)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux16, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux16, typ.UInt32)
v3.AddArg(lo)
v3.AddArg(s)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpLsh32x16, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpLsh32x16, typ.UInt32)
v4.AddArg(hi)
- v5 := b.NewValue0(v.Pos, OpSub16, types.UInt16)
- v6 := b.NewValue0(v.Pos, OpConst16, types.UInt16)
+ v5 := b.NewValue0(v.Pos, OpSub16, typ.UInt16)
+ v6 := b.NewValue0(v.Pos, OpConst16, typ.UInt16)
v6.AuxInt = 32
v5.AddArg(v6)
v5.AddArg(s)
v4.AddArg(v5)
v2.AddArg(v4)
v1.AddArg(v2)
- v7 := b.NewValue0(v.Pos, OpRsh32Ux16, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpRsh32Ux16, typ.UInt32)
v7.AddArg(hi)
- v8 := b.NewValue0(v.Pos, OpSub16, types.UInt16)
+ v8 := b.NewValue0(v.Pos, OpSub16, typ.UInt16)
v8.AddArg(s)
- v9 := b.NewValue0(v.Pos, OpConst16, types.UInt16)
+ v9 := b.NewValue0(v.Pos, OpConst16, typ.UInt16)
v9.AuxInt = 32
v8.AddArg(v9)
v7.AddArg(v8)
func rewriteValuedec64_OpRsh64Ux32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux32 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Rsh32Ux32 <types.UInt32> hi s) (Or32 <types.UInt32> (Or32 <types.UInt32> (Rsh32Ux32 <types.UInt32> lo s) (Lsh32x32 <types.UInt32> hi (Sub32 <types.UInt32> (Const32 <types.UInt32> [32]) s))) (Rsh32Ux32 <types.UInt32> hi (Sub32 <types.UInt32> s (Const32 <types.UInt32> [32])))))
+ // result: (Int64Make (Rsh32Ux32 <typ.UInt32> hi s) (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Rsh32Ux32 <typ.UInt32> lo s) (Lsh32x32 <typ.UInt32> hi (Sub32 <typ.UInt32> (Const32 <typ.UInt32> [32]) s))) (Rsh32Ux32 <typ.UInt32> hi (Sub32 <typ.UInt32> s (Const32 <typ.UInt32> [32])))))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux32, typ.UInt32)
v0.AddArg(hi)
v0.AddArg(s)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux32, typ.UInt32)
v3.AddArg(lo)
v3.AddArg(s)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpLsh32x32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpLsh32x32, typ.UInt32)
v4.AddArg(hi)
- v5 := b.NewValue0(v.Pos, OpSub32, types.UInt32)
- v6 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpSub32, typ.UInt32)
+ v6 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v6.AuxInt = 32
v5.AddArg(v6)
v5.AddArg(s)
v4.AddArg(v5)
v2.AddArg(v4)
v1.AddArg(v2)
- v7 := b.NewValue0(v.Pos, OpRsh32Ux32, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpRsh32Ux32, typ.UInt32)
v7.AddArg(hi)
- v8 := b.NewValue0(v.Pos, OpSub32, types.UInt32)
+ v8 := b.NewValue0(v.Pos, OpSub32, typ.UInt32)
v8.AddArg(s)
- v9 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v9.AuxInt = 32
v8.AddArg(v9)
v7.AddArg(v8)
func rewriteValuedec64_OpRsh64Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const64 [0])
}
// match: (Rsh64Ux64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh64Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh64Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh64Ux32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh64Ux8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux8 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Rsh32Ux8 <types.UInt32> hi s) (Or32 <types.UInt32> (Or32 <types.UInt32> (Rsh32Ux8 <types.UInt32> lo s) (Lsh32x8 <types.UInt32> hi (Sub8 <types.UInt8> (Const8 <types.UInt8> [32]) s))) (Rsh32Ux8 <types.UInt32> hi (Sub8 <types.UInt8> s (Const8 <types.UInt8> [32])))))
+ // result: (Int64Make (Rsh32Ux8 <typ.UInt32> hi s) (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Rsh32Ux8 <typ.UInt32> lo s) (Lsh32x8 <typ.UInt32> hi (Sub8 <typ.UInt8> (Const8 <typ.UInt8> [32]) s))) (Rsh32Ux8 <typ.UInt32> hi (Sub8 <typ.UInt8> s (Const8 <typ.UInt8> [32])))))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux8, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux8, typ.UInt32)
v0.AddArg(hi)
v0.AddArg(s)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux8, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux8, typ.UInt32)
v3.AddArg(lo)
v3.AddArg(s)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpLsh32x8, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpLsh32x8, typ.UInt32)
v4.AddArg(hi)
- v5 := b.NewValue0(v.Pos, OpSub8, types.UInt8)
- v6 := b.NewValue0(v.Pos, OpConst8, types.UInt8)
+ v5 := b.NewValue0(v.Pos, OpSub8, typ.UInt8)
+ v6 := b.NewValue0(v.Pos, OpConst8, typ.UInt8)
v6.AuxInt = 32
v5.AddArg(v6)
v5.AddArg(s)
v4.AddArg(v5)
v2.AddArg(v4)
v1.AddArg(v2)
- v7 := b.NewValue0(v.Pos, OpRsh32Ux8, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpRsh32Ux8, typ.UInt32)
v7.AddArg(hi)
- v8 := b.NewValue0(v.Pos, OpSub8, types.UInt8)
+ v8 := b.NewValue0(v.Pos, OpSub8, typ.UInt8)
v8.AddArg(s)
- v9 := b.NewValue0(v.Pos, OpConst8, types.UInt8)
+ v9 := b.NewValue0(v.Pos, OpConst8, typ.UInt8)
v9.AuxInt = 32
v8.AddArg(v9)
v7.AddArg(v8)
func rewriteValuedec64_OpRsh64x16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x16 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Rsh32x16 <types.UInt32> hi s) (Or32 <types.UInt32> (Or32 <types.UInt32> (Rsh32Ux16 <types.UInt32> lo s) (Lsh32x16 <types.UInt32> hi (Sub16 <types.UInt16> (Const16 <types.UInt16> [32]) s))) (And32 <types.UInt32> (Rsh32x16 <types.UInt32> hi (Sub16 <types.UInt16> s (Const16 <types.UInt16> [32]))) (Zeromask (ZeroExt16to32 (Rsh16Ux32 <types.UInt16> s (Const32 <types.UInt32> [5])))))))
+ // result: (Int64Make (Rsh32x16 <typ.UInt32> hi s) (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Rsh32Ux16 <typ.UInt32> lo s) (Lsh32x16 <typ.UInt32> hi (Sub16 <typ.UInt16> (Const16 <typ.UInt16> [32]) s))) (And32 <typ.UInt32> (Rsh32x16 <typ.UInt32> hi (Sub16 <typ.UInt16> s (Const16 <typ.UInt16> [32]))) (Zeromask (ZeroExt16to32 (Rsh16Ux32 <typ.UInt16> s (Const32 <typ.UInt32> [5])))))))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpRsh32x16, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32x16, typ.UInt32)
v0.AddArg(hi)
v0.AddArg(s)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux16, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux16, typ.UInt32)
v3.AddArg(lo)
v3.AddArg(s)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpLsh32x16, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpLsh32x16, typ.UInt32)
v4.AddArg(hi)
- v5 := b.NewValue0(v.Pos, OpSub16, types.UInt16)
- v6 := b.NewValue0(v.Pos, OpConst16, types.UInt16)
+ v5 := b.NewValue0(v.Pos, OpSub16, typ.UInt16)
+ v6 := b.NewValue0(v.Pos, OpConst16, typ.UInt16)
v6.AuxInt = 32
v5.AddArg(v6)
v5.AddArg(s)
v4.AddArg(v5)
v2.AddArg(v4)
v1.AddArg(v2)
- v7 := b.NewValue0(v.Pos, OpAnd32, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpRsh32x16, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpAnd32, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpRsh32x16, typ.UInt32)
v8.AddArg(hi)
- v9 := b.NewValue0(v.Pos, OpSub16, types.UInt16)
+ v9 := b.NewValue0(v.Pos, OpSub16, typ.UInt16)
v9.AddArg(s)
- v10 := b.NewValue0(v.Pos, OpConst16, types.UInt16)
+ v10 := b.NewValue0(v.Pos, OpConst16, typ.UInt16)
v10.AuxInt = 32
v9.AddArg(v10)
v8.AddArg(v9)
v7.AddArg(v8)
- v11 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
- v12 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
- v13 := b.NewValue0(v.Pos, OpRsh16Ux32, types.UInt16)
+ v11 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
+ v12 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
+ v13 := b.NewValue0(v.Pos, OpRsh16Ux32, typ.UInt16)
v13.AddArg(s)
- v14 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v14 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v14.AuxInt = 5
v13.AddArg(v14)
v12.AddArg(v13)
func rewriteValuedec64_OpRsh64x32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x32 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Rsh32x32 <types.UInt32> hi s) (Or32 <types.UInt32> (Or32 <types.UInt32> (Rsh32Ux32 <types.UInt32> lo s) (Lsh32x32 <types.UInt32> hi (Sub32 <types.UInt32> (Const32 <types.UInt32> [32]) s))) (And32 <types.UInt32> (Rsh32x32 <types.UInt32> hi (Sub32 <types.UInt32> s (Const32 <types.UInt32> [32]))) (Zeromask (Rsh32Ux32 <types.UInt32> s (Const32 <types.UInt32> [5]))))))
+ // result: (Int64Make (Rsh32x32 <typ.UInt32> hi s) (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Rsh32Ux32 <typ.UInt32> lo s) (Lsh32x32 <typ.UInt32> hi (Sub32 <typ.UInt32> (Const32 <typ.UInt32> [32]) s))) (And32 <typ.UInt32> (Rsh32x32 <typ.UInt32> hi (Sub32 <typ.UInt32> s (Const32 <typ.UInt32> [32]))) (Zeromask (Rsh32Ux32 <typ.UInt32> s (Const32 <typ.UInt32> [5]))))))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpRsh32x32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32x32, typ.UInt32)
v0.AddArg(hi)
v0.AddArg(s)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux32, typ.UInt32)
v3.AddArg(lo)
v3.AddArg(s)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpLsh32x32, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpLsh32x32, typ.UInt32)
v4.AddArg(hi)
- v5 := b.NewValue0(v.Pos, OpSub32, types.UInt32)
- v6 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpSub32, typ.UInt32)
+ v6 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v6.AuxInt = 32
v5.AddArg(v6)
v5.AddArg(s)
v4.AddArg(v5)
v2.AddArg(v4)
v1.AddArg(v2)
- v7 := b.NewValue0(v.Pos, OpAnd32, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpRsh32x32, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpAnd32, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpRsh32x32, typ.UInt32)
v8.AddArg(hi)
- v9 := b.NewValue0(v.Pos, OpSub32, types.UInt32)
+ v9 := b.NewValue0(v.Pos, OpSub32, typ.UInt32)
v9.AddArg(s)
- v10 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v10 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v10.AuxInt = 32
v9.AddArg(v10)
v8.AddArg(v9)
v7.AddArg(v8)
- v11 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
- v12 := b.NewValue0(v.Pos, OpRsh32Ux32, types.UInt32)
+ v11 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
+ v12 := b.NewValue0(v.Pos, OpRsh32Ux32, typ.UInt32)
v12.AddArg(s)
- v13 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v13 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v13.AuxInt = 5
v12.AddArg(v13)
v11.AddArg(v12)
func rewriteValuedec64_OpRsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x64 x (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Int64Make (Signmask (Int64Hi x)) (Signmask (Int64Hi x)))
break
}
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
- v3 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
+ v3 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v3.AddArg(x)
v2.AddArg(v3)
v.AddArg(v2)
}
// match: (Rsh64x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh64x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh64x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh64x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh64x8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x8 (Int64Make hi lo) s)
// cond:
- // result: (Int64Make (Rsh32x8 <types.UInt32> hi s) (Or32 <types.UInt32> (Or32 <types.UInt32> (Rsh32Ux8 <types.UInt32> lo s) (Lsh32x8 <types.UInt32> hi (Sub8 <types.UInt8> (Const8 <types.UInt8> [32]) s))) (And32 <types.UInt32> (Rsh32x8 <types.UInt32> hi (Sub8 <types.UInt8> s (Const8 <types.UInt8> [32]))) (Zeromask (ZeroExt8to32 (Rsh8Ux32 <types.UInt8> s (Const32 <types.UInt32> [5])))))))
+ // result: (Int64Make (Rsh32x8 <typ.UInt32> hi s) (Or32 <typ.UInt32> (Or32 <typ.UInt32> (Rsh32Ux8 <typ.UInt32> lo s) (Lsh32x8 <typ.UInt32> hi (Sub8 <typ.UInt8> (Const8 <typ.UInt8> [32]) s))) (And32 <typ.UInt32> (Rsh32x8 <typ.UInt32> hi (Sub8 <typ.UInt8> s (Const8 <typ.UInt8> [32]))) (Zeromask (ZeroExt8to32 (Rsh8Ux32 <typ.UInt8> s (Const32 <typ.UInt32> [5])))))))
for {
v_0 := v.Args[0]
if v_0.Op != OpInt64Make {
lo := v_0.Args[1]
s := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpRsh32x8, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32x8, typ.UInt32)
v0.AddArg(hi)
v0.AddArg(s)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux8, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux8, typ.UInt32)
v3.AddArg(lo)
v3.AddArg(s)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpLsh32x8, types.UInt32)
+ v4 := b.NewValue0(v.Pos, OpLsh32x8, typ.UInt32)
v4.AddArg(hi)
- v5 := b.NewValue0(v.Pos, OpSub8, types.UInt8)
- v6 := b.NewValue0(v.Pos, OpConst8, types.UInt8)
+ v5 := b.NewValue0(v.Pos, OpSub8, typ.UInt8)
+ v6 := b.NewValue0(v.Pos, OpConst8, typ.UInt8)
v6.AuxInt = 32
v5.AddArg(v6)
v5.AddArg(s)
v4.AddArg(v5)
v2.AddArg(v4)
v1.AddArg(v2)
- v7 := b.NewValue0(v.Pos, OpAnd32, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpRsh32x8, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpAnd32, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpRsh32x8, typ.UInt32)
v8.AddArg(hi)
- v9 := b.NewValue0(v.Pos, OpSub8, types.UInt8)
+ v9 := b.NewValue0(v.Pos, OpSub8, typ.UInt8)
v9.AddArg(s)
- v10 := b.NewValue0(v.Pos, OpConst8, types.UInt8)
+ v10 := b.NewValue0(v.Pos, OpConst8, typ.UInt8)
v10.AuxInt = 32
v9.AddArg(v10)
v8.AddArg(v9)
v7.AddArg(v8)
- v11 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
- v12 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
- v13 := b.NewValue0(v.Pos, OpRsh8Ux32, types.UInt8)
+ v11 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
+ v12 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
+ v13 := b.NewValue0(v.Pos, OpRsh8Ux32, typ.UInt8)
v13.AddArg(s)
- v14 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v14 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v14.AuxInt = 5
v13.AddArg(v14)
v12.AddArg(v13)
func rewriteValuedec64_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 _ (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Const32 [0])
}
// match: (Rsh8Ux64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh8Ux32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh8Ux32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh8Ux32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpRsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8x64 x (Int64Make (Const32 [c]) _))
// cond: c != 0
// result: (Signmask (SignExt8to32 x))
break
}
v.reset(OpSignmask)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh8x64 x (Int64Make hi lo))
// cond: hi.Op != OpConst32
- // result: (Rsh8x32 x (Or32 <types.UInt32> (Zeromask hi) lo))
+ // result: (Rsh8x32 x (Or32 <typ.UInt32> (Zeromask hi) lo))
for {
x := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh8x32)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpOr32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpZeromask, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpOr32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpZeromask, typ.UInt32)
v1.AddArg(hi)
v0.AddArg(v1)
v0.AddArg(lo)
func rewriteValuedec64_OpSignExt16to64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (SignExt16to64 x)
// cond:
// result: (SignExt32to64 (SignExt16to32 x))
for {
x := v.Args[0]
v.reset(OpSignExt32to64)
- v0 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuedec64_OpSignExt32to64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (SignExt32to64 x)
// cond:
// result: (Int64Make (Signmask x) x)
for {
x := v.Args[0]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpSignmask, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignmask, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
v.AddArg(x)
func rewriteValuedec64_OpSignExt8to64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (SignExt8to64 x)
// cond:
// result: (SignExt32to64 (SignExt8to32 x))
for {
x := v.Args[0]
v.reset(OpSignExt32to64)
- v0 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
config := b.Func.Config
_ = config
// match: (Store {t} dst (Int64Make hi lo) mem)
- // cond: t.(Type).Size() == 8 && !config.BigEndian
+ // cond: t.(*types.Type).Size() == 8 && !config.BigEndian
// result: (Store {hi.Type} (OffPtr <hi.Type.PtrTo()> [4] dst) hi (Store {lo.Type} dst lo mem))
for {
t := v.Aux
hi := v_1.Args[0]
lo := v_1.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && !config.BigEndian) {
+ if !(t.(*types.Type).Size() == 8 && !config.BigEndian) {
break
}
v.reset(OpStore)
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(hi)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v1.Aux = lo.Type
v1.AddArg(dst)
v1.AddArg(lo)
return true
}
// match: (Store {t} dst (Int64Make hi lo) mem)
- // cond: t.(Type).Size() == 8 && config.BigEndian
+ // cond: t.(*types.Type).Size() == 8 && config.BigEndian
// result: (Store {lo.Type} (OffPtr <lo.Type.PtrTo()> [4] dst) lo (Store {hi.Type} dst hi mem))
for {
t := v.Aux
hi := v_1.Args[0]
lo := v_1.Args[1]
mem := v.Args[2]
- if !(t.(Type).Size() == 8 && config.BigEndian) {
+ if !(t.(*types.Type).Size() == 8 && config.BigEndian) {
break
}
v.reset(OpStore)
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(lo)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v1.Aux = hi.Type
v1.AddArg(dst)
v1.AddArg(hi)
func rewriteValuedec64_OpSub64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Sub64 x y)
// cond:
- // result: (Int64Make (Sub32withcarry <types.Int32> (Int64Hi x) (Int64Hi y) (Select1 <TypeFlags> (Sub32carry (Int64Lo x) (Int64Lo y)))) (Select0 <types.UInt32> (Sub32carry (Int64Lo x) (Int64Lo y))))
+ // result: (Int64Make (Sub32withcarry <typ.Int32> (Int64Hi x) (Int64Hi y) (Select1 <types.TypeFlags> (Sub32carry (Int64Lo x) (Int64Lo y)))) (Select0 <typ.UInt32> (Sub32carry (Int64Lo x) (Int64Lo y))))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpSub32withcarry, types.Int32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpSub32withcarry, typ.Int32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSelect1, TypeFlags)
- v4 := b.NewValue0(v.Pos, OpSub32carry, MakeTuple(types.UInt32, TypeFlags))
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpSelect1, types.TypeFlags)
+ v4 := b.NewValue0(v.Pos, OpSub32carry, types.NewTuple(typ.UInt32, types.TypeFlags))
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(x)
v4.AddArg(v5)
- v6 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v6 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v6.AddArg(y)
v4.AddArg(v6)
v3.AddArg(v4)
v0.AddArg(v3)
v.AddArg(v0)
- v7 := b.NewValue0(v.Pos, OpSelect0, types.UInt32)
- v8 := b.NewValue0(v.Pos, OpSub32carry, MakeTuple(types.UInt32, TypeFlags))
- v9 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpSelect0, typ.UInt32)
+ v8 := b.NewValue0(v.Pos, OpSub32carry, types.NewTuple(typ.UInt32, types.TypeFlags))
+ v9 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v9.AddArg(x)
v8.AddArg(v9)
- v10 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v10 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v10.AddArg(y)
v8.AddArg(v10)
v7.AddArg(v8)
func rewriteValuedec64_OpXor64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Xor64 x y)
// cond:
- // result: (Int64Make (Xor32 <types.UInt32> (Int64Hi x) (Int64Hi y)) (Xor32 <types.UInt32> (Int64Lo x) (Int64Lo y)))
+ // result: (Int64Make (Xor32 <typ.UInt32> (Int64Hi x) (Int64Hi y)) (Xor32 <typ.UInt32> (Int64Lo x) (Int64Lo y)))
for {
x := v.Args[0]
y := v.Args[1]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpXor32, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpXor32, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v1.AddArg(x)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpInt64Hi, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpInt64Hi, typ.UInt32)
v2.AddArg(y)
v0.AddArg(v2)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpXor32, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpXor32, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpInt64Lo, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpInt64Lo, typ.UInt32)
v5.AddArg(y)
v3.AddArg(v5)
v.AddArg(v3)
func rewriteValuedec64_OpZeroExt16to64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ZeroExt16to64 x)
// cond:
// result: (ZeroExt32to64 (ZeroExt16to32 x))
for {
x := v.Args[0]
v.reset(OpZeroExt32to64)
- v0 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuedec64_OpZeroExt32to64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ZeroExt32to64 x)
// cond:
- // result: (Int64Make (Const32 <types.UInt32> [0]) x)
+ // result: (Int64Make (Const32 <typ.UInt32> [0]) x)
for {
x := v.Args[0]
v.reset(OpInt64Make)
- v0 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v0.AuxInt = 0
v.AddArg(v0)
v.AddArg(x)
func rewriteValuedec64_OpZeroExt8to64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ZeroExt8to64 x)
// cond:
// result: (ZeroExt32to64 (ZeroExt8to32 x))
for {
x := v.Args[0]
v.reset(OpZeroExt32to64)
- v0 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
}
return false
import "math"
import "cmd/internal/obj"
import "cmd/internal/objabi"
+import "cmd/compile/internal/types"
var _ = math.MinInt8 // in case not otherwise used
var _ = obj.ANOP // in case not otherwise used
var _ = objabi.GOROOT // in case not otherwise used
+var _ = types.TypeMem // in case not otherwise used
func rewriteValuegeneric(v *Value) bool {
switch v.Op {
_ = config
fe := b.Func.fe
_ = fe
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Arg {n} [off])
// cond: v.Type.IsString()
- // result: (StringMake (Arg <types.BytePtr> {n} [off]) (Arg <types.Int> {n} [off+config.PtrSize]))
+ // result: (StringMake (Arg <typ.BytePtr> {n} [off]) (Arg <typ.Int> {n} [off+config.PtrSize]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpStringMake)
- v0 := b.NewValue0(v.Pos, OpArg, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.BytePtr)
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.Int)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.Int)
v1.AuxInt = off + config.PtrSize
v1.Aux = n
v.AddArg(v1)
}
// match: (Arg {n} [off])
// cond: v.Type.IsSlice()
- // result: (SliceMake (Arg <v.Type.ElemType().PtrTo()> {n} [off]) (Arg <types.Int> {n} [off+config.PtrSize]) (Arg <types.Int> {n} [off+2*config.PtrSize]))
+ // result: (SliceMake (Arg <v.Type.ElemType().PtrTo()> {n} [off]) (Arg <typ.Int> {n} [off+config.PtrSize]) (Arg <typ.Int> {n} [off+2*config.PtrSize]))
for {
off := v.AuxInt
n := v.Aux
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.Int)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.Int)
v1.AuxInt = off + config.PtrSize
v1.Aux = n
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpArg, types.Int)
+ v2 := b.NewValue0(v.Pos, OpArg, typ.Int)
v2.AuxInt = off + 2*config.PtrSize
v2.Aux = n
v.AddArg(v2)
}
// match: (Arg {n} [off])
// cond: v.Type.IsInterface()
- // result: (IMake (Arg <types.BytePtr> {n} [off]) (Arg <types.BytePtr> {n} [off+config.PtrSize]))
+ // result: (IMake (Arg <typ.BytePtr> {n} [off]) (Arg <typ.BytePtr> {n} [off+config.PtrSize]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpIMake)
- v0 := b.NewValue0(v.Pos, OpArg, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.BytePtr)
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.BytePtr)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.BytePtr)
v1.AuxInt = off + config.PtrSize
v1.Aux = n
v.AddArg(v1)
}
// match: (Arg {n} [off])
// cond: v.Type.IsComplex() && v.Type.Size() == 16
- // result: (ComplexMake (Arg <types.Float64> {n} [off]) (Arg <types.Float64> {n} [off+8]))
+ // result: (ComplexMake (Arg <typ.Float64> {n} [off]) (Arg <typ.Float64> {n} [off+8]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpComplexMake)
- v0 := b.NewValue0(v.Pos, OpArg, types.Float64)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.Float64)
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.Float64)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.Float64)
v1.AuxInt = off + 8
v1.Aux = n
v.AddArg(v1)
}
// match: (Arg {n} [off])
// cond: v.Type.IsComplex() && v.Type.Size() == 8
- // result: (ComplexMake (Arg <types.Float32> {n} [off]) (Arg <types.Float32> {n} [off+4]))
+ // result: (ComplexMake (Arg <typ.Float32> {n} [off]) (Arg <typ.Float32> {n} [off+4]))
for {
off := v.AuxInt
n := v.Aux
break
}
v.reset(OpComplexMake)
- v0 := b.NewValue0(v.Pos, OpArg, types.Float32)
+ v0 := b.NewValue0(v.Pos, OpArg, typ.Float32)
v0.AuxInt = off
v0.Aux = n
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpArg, types.Float32)
+ v1 := b.NewValue0(v.Pos, OpArg, typ.Float32)
v1.AuxInt = off + 4
v1.Aux = n
v.AddArg(v1)
func rewriteValuegeneric_OpConstInterface_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ConstInterface)
// cond:
- // result: (IMake (ConstNil <types.BytePtr>) (ConstNil <types.BytePtr>))
+ // result: (IMake (ConstNil <typ.BytePtr>) (ConstNil <typ.BytePtr>))
for {
v.reset(OpIMake)
- v0 := b.NewValue0(v.Pos, OpConstNil, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpConstNil, typ.BytePtr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConstNil, types.BytePtr)
+ v1 := b.NewValue0(v.Pos, OpConstNil, typ.BytePtr)
v.AddArg(v1)
return true
}
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ConstSlice)
// cond: config.PtrSize == 4
- // result: (SliceMake (ConstNil <v.Type.ElemType().PtrTo()>) (Const32 <types.Int> [0]) (Const32 <types.Int> [0]))
+ // result: (SliceMake (ConstNil <v.Type.ElemType().PtrTo()>) (Const32 <typ.Int> [0]) (Const32 <typ.Int> [0]))
for {
if !(config.PtrSize == 4) {
break
v.reset(OpSliceMake)
v0 := b.NewValue0(v.Pos, OpConstNil, v.Type.ElemType().PtrTo())
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConst32, types.Int)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.Int)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpConst32, types.Int)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.Int)
v2.AuxInt = 0
v.AddArg(v2)
return true
}
// match: (ConstSlice)
// cond: config.PtrSize == 8
- // result: (SliceMake (ConstNil <v.Type.ElemType().PtrTo()>) (Const64 <types.Int> [0]) (Const64 <types.Int> [0]))
+ // result: (SliceMake (ConstNil <v.Type.ElemType().PtrTo()>) (Const64 <typ.Int> [0]) (Const64 <typ.Int> [0]))
for {
if !(config.PtrSize == 8) {
break
v.reset(OpSliceMake)
v0 := b.NewValue0(v.Pos, OpConstNil, v.Type.ElemType().PtrTo())
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConst64, types.Int)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.Int)
v1.AuxInt = 0
v.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpConst64, types.Int)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.Int)
v2.AuxInt = 0
v.AddArg(v2)
return true
_ = config
fe := b.Func.fe
_ = fe
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (ConstString {s})
// cond: config.PtrSize == 4 && s.(string) == ""
- // result: (StringMake (ConstNil) (Const32 <types.Int> [0]))
+ // result: (StringMake (ConstNil) (Const32 <typ.Int> [0]))
for {
s := v.Aux
if !(config.PtrSize == 4 && s.(string) == "") {
break
}
v.reset(OpStringMake)
- v0 := b.NewValue0(v.Pos, OpConstNil, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpConstNil, typ.BytePtr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConst32, types.Int)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.Int)
v1.AuxInt = 0
v.AddArg(v1)
return true
}
// match: (ConstString {s})
// cond: config.PtrSize == 8 && s.(string) == ""
- // result: (StringMake (ConstNil) (Const64 <types.Int> [0]))
+ // result: (StringMake (ConstNil) (Const64 <typ.Int> [0]))
for {
s := v.Aux
if !(config.PtrSize == 8 && s.(string) == "") {
break
}
v.reset(OpStringMake)
- v0 := b.NewValue0(v.Pos, OpConstNil, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpConstNil, typ.BytePtr)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpConst64, types.Int)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.Int)
v1.AuxInt = 0
v.AddArg(v1)
return true
}
// match: (ConstString {s})
// cond: config.PtrSize == 4 && s.(string) != ""
- // result: (StringMake (Addr <types.BytePtr> {fe.StringData(s.(string))} (SB)) (Const32 <types.Int> [int64(len(s.(string)))]))
+ // result: (StringMake (Addr <typ.BytePtr> {fe.StringData(s.(string))} (SB)) (Const32 <typ.Int> [int64(len(s.(string)))]))
for {
s := v.Aux
if !(config.PtrSize == 4 && s.(string) != "") {
break
}
v.reset(OpStringMake)
- v0 := b.NewValue0(v.Pos, OpAddr, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpAddr, typ.BytePtr)
v0.Aux = fe.StringData(s.(string))
- v1 := b.NewValue0(v.Pos, OpSB, types.Uintptr)
+ v1 := b.NewValue0(v.Pos, OpSB, typ.Uintptr)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst32, types.Int)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.Int)
v2.AuxInt = int64(len(s.(string)))
v.AddArg(v2)
return true
}
// match: (ConstString {s})
// cond: config.PtrSize == 8 && s.(string) != ""
- // result: (StringMake (Addr <types.BytePtr> {fe.StringData(s.(string))} (SB)) (Const64 <types.Int> [int64(len(s.(string)))]))
+ // result: (StringMake (Addr <typ.BytePtr> {fe.StringData(s.(string))} (SB)) (Const64 <typ.Int> [int64(len(s.(string)))]))
for {
s := v.Aux
if !(config.PtrSize == 8 && s.(string) != "") {
break
}
v.reset(OpStringMake)
- v0 := b.NewValue0(v.Pos, OpAddr, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpAddr, typ.BytePtr)
v0.Aux = fe.StringData(s.(string))
- v1 := b.NewValue0(v.Pos, OpSB, types.Uintptr)
+ v1 := b.NewValue0(v.Pos, OpSB, typ.Uintptr)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.Int)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.Int)
v2.AuxInt = int64(len(s.(string)))
v.AddArg(v2)
return true
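The ConstString rules above lower a string constant to its runtime representation: a StringMake of a pointer to the static string data plus a constant length (or a nil pointer and zero length for the empty string). A rough illustration of that (pointer, length) pair, assuming a Go 1.20+ toolchain for unsafe.StringData (stringParts is an illustrative name, not compiler code):

```go
package main

import (
	"fmt"
	"unsafe"
)

// stringParts mirrors what StringMake assembles: the (data pointer,
// length) pair that makes up a Go string header.
func stringParts(s string) (*byte, int) {
	return unsafe.StringData(s), len(s)
}

func main() {
	p, n := stringParts("hello")
	fmt.Println(*p == 'h', n)
}
```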
func rewriteValuegeneric_OpDiv16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16 (Const16 [c]) (Const16 [d]))
// cond: d != 0
// result: (Const16 [int64(int16(c)/int16(d))])
}
// match: (Div16 <t> x (Const16 [-1<<15]))
// cond:
- // result: (Rsh16Ux64 (And16 <t> x (Neg16 <t> x)) (Const64 <types.UInt64> [15]))
+ // result: (Rsh16Ux64 (And16 <t> x (Neg16 <t> x)) (Const64 <typ.UInt64> [15]))
for {
t := v.Type
x := v.Args[0]
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 15
v.AddArg(v2)
return true
}
// match: (Div16 <t> n (Const16 [c]))
// cond: isPowerOfTwo(c)
- // result: (Rsh16x64 (Add16 <t> n (Rsh16Ux64 <t> (Rsh16x64 <t> n (Const64 <types.UInt64> [15])) (Const64 <types.UInt64> [16-log2(c)]))) (Const64 <types.UInt64> [log2(c)]))
+ // result: (Rsh16x64 (Add16 <t> n (Rsh16Ux64 <t> (Rsh16x64 <t> n (Const64 <typ.UInt64> [15])) (Const64 <typ.UInt64> [16-log2(c)]))) (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v1 := b.NewValue0(v.Pos, OpRsh16Ux64, t)
v2 := b.NewValue0(v.Pos, OpRsh16x64, t)
v2.AddArg(n)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 15
v2.AddArg(v3)
v1.AddArg(v2)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 16 - log2(c)
v1.AddArg(v4)
v0.AddArg(v1)
v.AddArg(v0)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = log2(c)
v.AddArg(v5)
return true
}
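The power-of-two rule above replaces a signed 16-bit division by c = 2^k with an add-and-shift sequence: the bias, built by Rsh16Ux64 (Rsh16x64 n [15]) [16-k], is 2^k-1 when n is negative and 0 otherwise, and adding it turns the flooring arithmetic shift into Go's truncate-toward-zero division. A small sketch of the identity (div16pow2 is an illustrative name, not compiler code):

```go
package main

import "fmt"

// div16pow2 mirrors the rewrite: (n + bias) >> k, where the bias is
// 2^k-1 for negative n and 0 otherwise.
func div16pow2(n int16, k uint) int16 {
	bias := int16(uint16(n>>15) >> (16 - k)) // 2^k-1 if n < 0, else 0
	return (n + bias) >> k
}

func main() {
	for _, n := range []int16{-32768, -7, -1, 0, 1, 7, 100} {
		for _, k := range []uint{1, 2, 4} {
			if div16pow2(n, k) != n/(1<<k) {
				fmt.Println("mismatch", n, k)
				return
			}
		}
	}
	fmt.Println("ok")
}
```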
// match: (Div16 <t> x (Const16 [c]))
// cond: smagicOK(16,c)
- // result: (Sub16 <t> (Rsh32x64 <t> (Mul32 <types.UInt32> (Const32 <types.UInt32> [int64(smagic(16,c).m)]) (SignExt16to32 x)) (Const64 <types.UInt64> [16+smagic(16,c).s])) (Rsh32x64 <t> (SignExt16to32 x) (Const64 <types.UInt64> [31])))
+ // result: (Sub16 <t> (Rsh32x64 <t> (Mul32 <typ.UInt32> (Const32 <typ.UInt32> [int64(smagic(16,c).m)]) (SignExt16to32 x)) (Const64 <typ.UInt64> [16+smagic(16,c).s])) (Rsh32x64 <t> (SignExt16to32 x) (Const64 <typ.UInt64> [31])))
for {
t := v.Type
x := v.Args[0]
v.reset(OpSub16)
v.Type = t
v0 := b.NewValue0(v.Pos, OpRsh32x64, t)
- v1 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(smagic(16, c).m)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v3 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 16 + smagic(16, c).s
v0.AddArg(v4)
v.AddArg(v0)
v5 := b.NewValue0(v.Pos, OpRsh32x64, t)
- v6 := b.NewValue0(v.Pos, OpSignExt16to32, types.Int32)
+ v6 := b.NewValue0(v.Pos, OpSignExt16to32, typ.Int32)
v6.AddArg(x)
v5.AddArg(v6)
- v7 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v7 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v7.AuxInt = 31
v5.AddArg(v7)
v.AddArg(v5)
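The smagic rule above is signed magic-number division: multiply the sign-extended operand by a precomputed constant, shift, then correct negative inputs by subtracting x>>31 (which is -1 for negative x, so the Sub16 adds 1). A sketch of the identity for non-power-of-two c > 0, with directly computed constants rather than the compiler's smagic() helper, and 64-bit arithmetic instead of the rule's 32-bit Mul32 (sdiv16magic is an illustrative name):

```go
package main

import "fmt"

// sdiv16magic: with s the smallest shift such that 2^s >= c and
// M = ceil(2^(16+s)/c), truncated division satisfies
// x/c == (x*M)>>(16+s) + (1 if x < 0) for every int16 x.
func sdiv16magic(x int16, c int64) int16 {
	var s uint
	for s = 0; int64(1)<<s < c; s++ {
	}
	M := (int64(1)<<(16+s) + c - 1) / c // round up
	q := int64(x) * M >> (16 + s)
	if x < 0 {
		q++ // the rule's Sub16 of x>>31 performs this correction
	}
	return int16(q)
}

func main() {
	for _, c := range []int64{3, 7, 100} {
		for x := -32768; x < 32768; x++ {
			if int64(sdiv16magic(int16(x), c)) != int64(x)/c {
				fmt.Println("mismatch", x, c)
				return
			}
		}
	}
	fmt.Println("ok")
}
```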
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div16u (Const16 [c]) (Const16 [d]))
// cond: d != 0
// result: (Const16 [int64(int16(uint16(c)/uint16(d)))])
}
// match: (Div16u n (Const16 [c]))
// cond: isPowerOfTwo(c&0xffff)
- // result: (Rsh16Ux64 n (Const64 <types.UInt64> [log2(c&0xffff)]))
+ // result: (Rsh16Ux64 n (Const64 <typ.UInt64> [log2(c&0xffff)]))
for {
n := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh16Ux64)
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c & 0xffff)
v.AddArg(v0)
return true
}
// match: (Div16u x (Const16 [c]))
// cond: umagicOK(16, c) && config.RegSize == 8
- // result: (Trunc64to16 (Rsh64Ux64 <types.UInt64> (Mul64 <types.UInt64> (Const64 <types.UInt64> [int64(1<<16+umagic(16,c).m)]) (ZeroExt16to64 x)) (Const64 <types.UInt64> [16+umagic(16,c).s])))
+ // result: (Trunc64to16 (Rsh64Ux64 <typ.UInt64> (Mul64 <typ.UInt64> (Const64 <typ.UInt64> [int64(1<<16+umagic(16,c).m)]) (ZeroExt16to64 x)) (Const64 <typ.UInt64> [16+umagic(16,c).s])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc64to16)
- v0 := b.NewValue0(v.Pos, OpRsh64Ux64, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpMul64, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpRsh64Ux64, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMul64, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = int64(1<<16 + umagic(16, c).m)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to64, typ.UInt64)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 16 + umagic(16, c).s
v0.AddArg(v4)
v.AddArg(v0)
}
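The rule above is the 64-bit-register form of unsigned magic division: the divide becomes a Mul64 by 1<<16+umagic(16,c).m followed by a logical right shift of 16+umagic(16,c).s. The identity can be checked exhaustively; this sketch derives its own equivalent constants rather than reusing the compiler's umagic helper (umagic16 here is an illustrative name):

```go
package main

import "fmt"

// umagic16 picks the smallest s with 2^s >= c and the full multiplier
// M = ceil(2^(16+s)/c); then x/c == x*M >> (16+s) for every 16-bit x.
// The rule stores an equivalent multiplier split as 1<<16 + m.
func umagic16(c uint64) (M uint64, s uint) {
	for s = 0; uint64(1)<<s < c; s++ {
	}
	M = (uint64(1)<<(16+s) + c - 1) / c // round up
	return
}

func main() {
	for _, c := range []uint64{3, 7, 10, 641} {
		M, s := umagic16(c)
		for x := uint64(0); x < 1<<16; x++ {
			if x*M>>(16+s) != x/c {
				fmt.Println("mismatch", x, c)
				return
			}
		}
	}
	fmt.Println("ok")
}
```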
// match: (Div16u x (Const16 [c]))
// cond: umagicOK(16, c) && config.RegSize == 4 && umagic(16,c).m&1 == 0
- // result: (Trunc32to16 (Rsh32Ux64 <types.UInt32> (Mul32 <types.UInt32> (Const32 <types.UInt32> [int64(1<<15+umagic(16,c).m/2)]) (ZeroExt16to32 x)) (Const64 <types.UInt64> [16+umagic(16,c).s-1])))
+ // result: (Trunc32to16 (Rsh32Ux64 <typ.UInt32> (Mul32 <typ.UInt32> (Const32 <typ.UInt32> [int64(1<<15+umagic(16,c).m/2)]) (ZeroExt16to32 x)) (Const64 <typ.UInt64> [16+umagic(16,c).s-1])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc32to16)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux64, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux64, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(1<<15 + umagic(16, c).m/2)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 16 + umagic(16, c).s - 1
v0.AddArg(v4)
v.AddArg(v0)
}
// match: (Div16u x (Const16 [c]))
// cond: umagicOK(16, c) && config.RegSize == 4 && c&1 == 0
- // result: (Trunc32to16 (Rsh32Ux64 <types.UInt32> (Mul32 <types.UInt32> (Const32 <types.UInt32> [int64(1<<15+(umagic(16,c).m+1)/2)]) (Rsh32Ux64 <types.UInt32> (ZeroExt16to32 x) (Const64 <types.UInt64> [1]))) (Const64 <types.UInt64> [16+umagic(16,c).s-2])))
+ // result: (Trunc32to16 (Rsh32Ux64 <typ.UInt32> (Mul32 <typ.UInt32> (Const32 <typ.UInt32> [int64(1<<15+(umagic(16,c).m+1)/2)]) (Rsh32Ux64 <typ.UInt32> (ZeroExt16to32 x) (Const64 <typ.UInt64> [1]))) (Const64 <typ.UInt64> [16+umagic(16,c).s-2])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc32to16)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux64, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux64, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(1<<15 + (umagic(16, c).m+1)/2)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpRsh32Ux64, types.UInt32)
- v4 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpRsh32Ux64, typ.UInt32)
+ v4 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 1
v3.AddArg(v5)
v1.AddArg(v3)
v0.AddArg(v1)
- v6 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v6.AuxInt = 16 + umagic(16, c).s - 2
v0.AddArg(v6)
v.AddArg(v0)
}
// match: (Div16u x (Const16 [c]))
// cond: umagicOK(16, c) && config.RegSize == 4
- // result: (Trunc32to16 (Rsh32Ux64 <types.UInt32> (Avg32u (Lsh32x64 <types.UInt32> (ZeroExt16to32 x) (Const64 <types.UInt64> [16])) (Mul32 <types.UInt32> (Const32 <types.UInt32> [int64(umagic(16,c).m)]) (ZeroExt16to32 x))) (Const64 <types.UInt64> [16+umagic(16,c).s-1])))
+ // result: (Trunc32to16 (Rsh32Ux64 <typ.UInt32> (Avg32u (Lsh32x64 <typ.UInt32> (ZeroExt16to32 x) (Const64 <typ.UInt64> [16])) (Mul32 <typ.UInt32> (Const32 <typ.UInt32> [int64(umagic(16,c).m)]) (ZeroExt16to32 x))) (Const64 <typ.UInt64> [16+umagic(16,c).s-1])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc32to16)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux64, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpAvg32u, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpLsh32x64, types.UInt32)
- v3 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux64, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpAvg32u, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpLsh32x64, typ.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 16
v2.AddArg(v4)
v1.AddArg(v2)
- v5 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v6 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v6 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v6.AuxInt = int64(umagic(16, c).m)
v5.AddArg(v6)
- v7 := b.NewValue0(v.Pos, OpZeroExt16to32, types.UInt32)
+ v7 := b.NewValue0(v.Pos, OpZeroExt16to32, typ.UInt32)
v7.AddArg(x)
v5.AddArg(v7)
v1.AddArg(v5)
v0.AddArg(v1)
- v8 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v8 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v8.AuxInt = 16 + umagic(16, c).s - 1
v0.AddArg(v8)
v.AddArg(v0)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32 (Const32 [c]) (Const32 [d]))
// cond: d != 0
// result: (Const32 [int64(int32(c)/int32(d))])
}
// match: (Div32 <t> x (Const32 [-1<<31]))
// cond:
- // result: (Rsh32Ux64 (And32 <t> x (Neg32 <t> x)) (Const64 <types.UInt64> [31]))
+ // result: (Rsh32Ux64 (And32 <t> x (Neg32 <t> x)) (Const64 <typ.UInt64> [31]))
for {
t := v.Type
x := v.Args[0]
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 31
v.AddArg(v2)
return true
}
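The -1<<31 rule above relies on a branch-free trick: x / MinInt32 is 1 exactly when x == MinInt32 and 0 otherwise, and x & -x isolates the lowest set bit of x, whose bit 31 is set only for x == MinInt32, so an unsigned right shift by 31 yields the quotient directly. A sketch (div32min is an illustrative name; the Div16 and Div64 rules use the same shape with 15 and 63):

```go
package main

import "fmt"

// div32min computes x / MinInt32 as ((x & -x) >>> 31), matching the
// Rsh32Ux64 (And32 x (Neg32 x)) [31] rewrite.
func div32min(x int32) int32 {
	return int32(uint32(x&-x) >> 31)
}

func main() {
	for _, x := range []int32{-1 << 31, -6, -1, 0, 1, 12345} {
		if div32min(x) != x/(-1<<31) {
			fmt.Println("mismatch", x)
			return
		}
	}
	fmt.Println("ok")
}
```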
// match: (Div32 <t> n (Const32 [c]))
// cond: isPowerOfTwo(c)
- // result: (Rsh32x64 (Add32 <t> n (Rsh32Ux64 <t> (Rsh32x64 <t> n (Const64 <types.UInt64> [31])) (Const64 <types.UInt64> [32-log2(c)]))) (Const64 <types.UInt64> [log2(c)]))
+ // result: (Rsh32x64 (Add32 <t> n (Rsh32Ux64 <t> (Rsh32x64 <t> n (Const64 <typ.UInt64> [31])) (Const64 <typ.UInt64> [32-log2(c)]))) (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v1 := b.NewValue0(v.Pos, OpRsh32Ux64, t)
v2 := b.NewValue0(v.Pos, OpRsh32x64, t)
v2.AddArg(n)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 31
v2.AddArg(v3)
v1.AddArg(v2)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 32 - log2(c)
v1.AddArg(v4)
v0.AddArg(v1)
v.AddArg(v0)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = log2(c)
v.AddArg(v5)
return true
}
// match: (Div32 <t> x (Const32 [c]))
// cond: smagicOK(32,c) && config.RegSize == 8
- // result: (Sub32 <t> (Rsh64x64 <t> (Mul64 <types.UInt64> (Const64 <types.UInt64> [int64(smagic(32,c).m)]) (SignExt32to64 x)) (Const64 <types.UInt64> [32+smagic(32,c).s])) (Rsh64x64 <t> (SignExt32to64 x) (Const64 <types.UInt64> [63])))
+ // result: (Sub32 <t> (Rsh64x64 <t> (Mul64 <typ.UInt64> (Const64 <typ.UInt64> [int64(smagic(32,c).m)]) (SignExt32to64 x)) (Const64 <typ.UInt64> [32+smagic(32,c).s])) (Rsh64x64 <t> (SignExt32to64 x) (Const64 <typ.UInt64> [63])))
for {
t := v.Type
x := v.Args[0]
v.reset(OpSub32)
v.Type = t
v0 := b.NewValue0(v.Pos, OpRsh64x64, t)
- v1 := b.NewValue0(v.Pos, OpMul64, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMul64, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = int64(smagic(32, c).m)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v3 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 32 + smagic(32, c).s
v0.AddArg(v4)
v.AddArg(v0)
v5 := b.NewValue0(v.Pos, OpRsh64x64, t)
- v6 := b.NewValue0(v.Pos, OpSignExt32to64, types.Int64)
+ v6 := b.NewValue0(v.Pos, OpSignExt32to64, typ.Int64)
v6.AddArg(x)
v5.AddArg(v6)
- v7 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v7 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v7.AuxInt = 63
v5.AddArg(v7)
v.AddArg(v5)
}
// match: (Div32 <t> x (Const32 [c]))
// cond: smagicOK(32,c) && config.RegSize == 4 && smagic(32,c).m&1 == 0
- // result: (Sub32 <t> (Rsh32x64 <t> (Hmul32 <t> (Const32 <types.UInt32> [int64(int32(smagic(32,c).m/2))]) x) (Const64 <types.UInt64> [smagic(32,c).s-1])) (Rsh32x64 <t> x (Const64 <types.UInt64> [31])))
+ // result: (Sub32 <t> (Rsh32x64 <t> (Hmul32 <t> (Const32 <typ.UInt32> [int64(int32(smagic(32,c).m/2))]) x) (Const64 <typ.UInt64> [smagic(32,c).s-1])) (Rsh32x64 <t> x (Const64 <typ.UInt64> [31])))
for {
t := v.Type
x := v.Args[0]
v.Type = t
v0 := b.NewValue0(v.Pos, OpRsh32x64, t)
v1 := b.NewValue0(v.Pos, OpHmul32, t)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(int32(smagic(32, c).m / 2))
v1.AddArg(v2)
v1.AddArg(x)
v0.AddArg(v1)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = smagic(32, c).s - 1
v0.AddArg(v3)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpRsh32x64, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 31
v4.AddArg(v5)
v.AddArg(v4)
}
// match: (Div32 <t> x (Const32 [c]))
// cond: smagicOK(32,c) && config.RegSize == 4 && smagic(32,c).m&1 != 0
- // result: (Sub32 <t> (Rsh32x64 <t> (Add32 <t> (Hmul32 <t> (Const32 <types.UInt32> [int64(int32(smagic(32,c).m))]) x) x) (Const64 <types.UInt64> [smagic(32,c).s])) (Rsh32x64 <t> x (Const64 <types.UInt64> [31])))
+ // result: (Sub32 <t> (Rsh32x64 <t> (Add32 <t> (Hmul32 <t> (Const32 <typ.UInt32> [int64(int32(smagic(32,c).m))]) x) x) (Const64 <typ.UInt64> [smagic(32,c).s])) (Rsh32x64 <t> x (Const64 <typ.UInt64> [31])))
for {
t := v.Type
x := v.Args[0]
v0 := b.NewValue0(v.Pos, OpRsh32x64, t)
v1 := b.NewValue0(v.Pos, OpAdd32, t)
v2 := b.NewValue0(v.Pos, OpHmul32, t)
- v3 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v3.AuxInt = int64(int32(smagic(32, c).m))
v2.AddArg(v3)
v2.AddArg(x)
v1.AddArg(v2)
v1.AddArg(x)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = smagic(32, c).s
v0.AddArg(v4)
v.AddArg(v0)
v5 := b.NewValue0(v.Pos, OpRsh32x64, t)
v5.AddArg(x)
- v6 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v6.AuxInt = 31
v5.AddArg(v6)
v.AddArg(v5)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div32u (Const32 [c]) (Const32 [d]))
// cond: d != 0
// result: (Const32 [int64(int32(uint32(c)/uint32(d)))])
}
// match: (Div32u n (Const32 [c]))
// cond: isPowerOfTwo(c&0xffffffff)
- // result: (Rsh32Ux64 n (Const64 <types.UInt64> [log2(c&0xffffffff)]))
+ // result: (Rsh32Ux64 n (Const64 <typ.UInt64> [log2(c&0xffffffff)]))
for {
n := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh32Ux64)
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c & 0xffffffff)
v.AddArg(v0)
return true
}
// match: (Div32u x (Const32 [c]))
// cond: umagicOK(32, c) && config.RegSize == 4 && umagic(32,c).m&1 == 0
- // result: (Rsh32Ux64 <types.UInt32> (Hmul32u <types.UInt32> (Const32 <types.UInt32> [int64(int32(1<<31+umagic(32,c).m/2))]) x) (Const64 <types.UInt64> [umagic(32,c).s-1]))
+ // result: (Rsh32Ux64 <typ.UInt32> (Hmul32u <typ.UInt32> (Const32 <typ.UInt32> [int64(int32(1<<31+umagic(32,c).m/2))]) x) (Const64 <typ.UInt64> [umagic(32,c).s-1]))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpRsh32Ux64)
- v.Type = types.UInt32
- v0 := b.NewValue0(v.Pos, OpHmul32u, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v.Type = typ.UInt32
+ v0 := b.NewValue0(v.Pos, OpHmul32u, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v1.AuxInt = int64(int32(1<<31 + umagic(32, c).m/2))
v0.AddArg(v1)
v0.AddArg(x)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = umagic(32, c).s - 1
v.AddArg(v2)
return true
}
// match: (Div32u x (Const32 [c]))
// cond: umagicOK(32, c) && config.RegSize == 4 && c&1 == 0
- // result: (Rsh32Ux64 <types.UInt32> (Hmul32u <types.UInt32> (Const32 <types.UInt32> [int64(int32(1<<31+(umagic(32,c).m+1)/2))]) (Rsh32Ux64 <types.UInt32> x (Const64 <types.UInt64> [1]))) (Const64 <types.UInt64> [umagic(32,c).s-2]))
+ // result: (Rsh32Ux64 <typ.UInt32> (Hmul32u <typ.UInt32> (Const32 <typ.UInt32> [int64(int32(1<<31+(umagic(32,c).m+1)/2))]) (Rsh32Ux64 <typ.UInt32> x (Const64 <typ.UInt64> [1]))) (Const64 <typ.UInt64> [umagic(32,c).s-2]))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpRsh32Ux64)
- v.Type = types.UInt32
- v0 := b.NewValue0(v.Pos, OpHmul32u, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v.Type = typ.UInt32
+ v0 := b.NewValue0(v.Pos, OpHmul32u, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v1.AuxInt = int64(int32(1<<31 + (umagic(32, c).m+1)/2))
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpRsh32Ux64, types.UInt32)
+ v2 := b.NewValue0(v.Pos, OpRsh32Ux64, typ.UInt32)
v2.AddArg(x)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 1
v2.AddArg(v3)
v0.AddArg(v2)
v.AddArg(v0)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = umagic(32, c).s - 2
v.AddArg(v4)
return true
}
// match: (Div32u x (Const32 [c]))
// cond: umagicOK(32, c) && config.RegSize == 4
- // result: (Rsh32Ux64 <types.UInt32> (Avg32u x (Hmul32u <types.UInt32> (Const32 <types.UInt32> [int64(int32(umagic(32,c).m))]) x)) (Const64 <types.UInt64> [umagic(32,c).s-1]))
+ // result: (Rsh32Ux64 <typ.UInt32> (Avg32u x (Hmul32u <typ.UInt32> (Const32 <typ.UInt32> [int64(int32(umagic(32,c).m))]) x)) (Const64 <typ.UInt64> [umagic(32,c).s-1]))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpRsh32Ux64)
- v.Type = types.UInt32
- v0 := b.NewValue0(v.Pos, OpAvg32u, types.UInt32)
+ v.Type = typ.UInt32
+ v0 := b.NewValue0(v.Pos, OpAvg32u, typ.UInt32)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpHmul32u, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpHmul32u, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(int32(umagic(32, c).m))
v1.AddArg(v2)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = umagic(32, c).s - 1
v.AddArg(v3)
return true
}
// match: (Div32u x (Const32 [c]))
// cond: umagicOK(32, c) && config.RegSize == 8 && umagic(32,c).m&1 == 0
- // result: (Trunc64to32 (Rsh64Ux64 <types.UInt64> (Mul64 <types.UInt64> (Const64 <types.UInt64> [int64(1<<31+umagic(32,c).m/2)]) (ZeroExt32to64 x)) (Const64 <types.UInt64> [32+umagic(32,c).s-1])))
+ // result: (Trunc64to32 (Rsh64Ux64 <typ.UInt64> (Mul64 <typ.UInt64> (Const64 <typ.UInt64> [int64(1<<31+umagic(32,c).m/2)]) (ZeroExt32to64 x)) (Const64 <typ.UInt64> [32+umagic(32,c).s-1])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc64to32)
- v0 := b.NewValue0(v.Pos, OpRsh64Ux64, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpMul64, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpRsh64Ux64, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMul64, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = int64(1<<31 + umagic(32, c).m/2)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 32 + umagic(32, c).s - 1
v0.AddArg(v4)
v.AddArg(v0)
}
// match: (Div32u x (Const32 [c]))
// cond: umagicOK(32, c) && config.RegSize == 8 && c&1 == 0
- // result: (Trunc64to32 (Rsh64Ux64 <types.UInt64> (Mul64 <types.UInt64> (Const64 <types.UInt64> [int64(1<<31+(umagic(32,c).m+1)/2)]) (Rsh64Ux64 <types.UInt64> (ZeroExt32to64 x) (Const64 <types.UInt64> [1]))) (Const64 <types.UInt64> [32+umagic(32,c).s-2])))
+ // result: (Trunc64to32 (Rsh64Ux64 <typ.UInt64> (Mul64 <typ.UInt64> (Const64 <typ.UInt64> [int64(1<<31+(umagic(32,c).m+1)/2)]) (Rsh64Ux64 <typ.UInt64> (ZeroExt32to64 x) (Const64 <typ.UInt64> [1]))) (Const64 <typ.UInt64> [32+umagic(32,c).s-2])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc64to32)
- v0 := b.NewValue0(v.Pos, OpRsh64Ux64, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpMul64, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpRsh64Ux64, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpMul64, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = int64(1<<31 + (umagic(32, c).m+1)/2)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpRsh64Ux64, types.UInt64)
- v4 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpRsh64Ux64, typ.UInt64)
+ v4 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v4.AddArg(x)
v3.AddArg(v4)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 1
v3.AddArg(v5)
v1.AddArg(v3)
v0.AddArg(v1)
- v6 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v6.AuxInt = 32 + umagic(32, c).s - 2
v0.AddArg(v6)
v.AddArg(v0)
}
// match: (Div32u x (Const32 [c]))
// cond: umagicOK(32, c) && config.RegSize == 8
- // result: (Trunc64to32 (Rsh64Ux64 <types.UInt64> (Avg64u (Lsh64x64 <types.UInt64> (ZeroExt32to64 x) (Const64 <types.UInt64> [32])) (Mul64 <types.UInt64> (Const64 <types.UInt32> [int64(umagic(32,c).m)]) (ZeroExt32to64 x))) (Const64 <types.UInt64> [32+umagic(32,c).s-1])))
+ // result: (Trunc64to32 (Rsh64Ux64 <typ.UInt64> (Avg64u (Lsh64x64 <typ.UInt64> (ZeroExt32to64 x) (Const64 <typ.UInt64> [32])) (Mul64 <typ.UInt64> (Const64 <typ.UInt32> [int64(umagic(32,c).m)]) (ZeroExt32to64 x))) (Const64 <typ.UInt64> [32+umagic(32,c).s-1])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc64to32)
- v0 := b.NewValue0(v.Pos, OpRsh64Ux64, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpAvg64u, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpLsh64x64, types.UInt64)
- v3 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpRsh64Ux64, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpAvg64u, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpLsh64x64, typ.UInt64)
+ v3 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v3.AddArg(x)
v2.AddArg(v3)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 32
v2.AddArg(v4)
v1.AddArg(v2)
- v5 := b.NewValue0(v.Pos, OpMul64, types.UInt64)
- v6 := b.NewValue0(v.Pos, OpConst64, types.UInt32)
+ v5 := b.NewValue0(v.Pos, OpMul64, typ.UInt64)
+ v6 := b.NewValue0(v.Pos, OpConst64, typ.UInt32)
v6.AuxInt = int64(umagic(32, c).m)
v5.AddArg(v6)
- v7 := b.NewValue0(v.Pos, OpZeroExt32to64, types.UInt64)
+ v7 := b.NewValue0(v.Pos, OpZeroExt32to64, typ.UInt64)
v7.AddArg(x)
v5.AddArg(v7)
v1.AddArg(v5)
v0.AddArg(v1)
- v8 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v8 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v8.AuxInt = 32 + umagic(32, c).s - 1
v0.AddArg(v8)
v.AddArg(v0)
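The general Div32u rule above uses Avg64u because the full multiplier is conceptually 2^32 + m, which does not fit in 64 bits: the product is rebuilt as Avg64u(x<<32, x*m) = floor((x<<32 + x*m)/2), computed without overflow, and the final shift drops by one to compensate for the halving. A sketch with directly computed constants instead of the compiler's umagic(32,c) (avg64u and div32uMagic are illustrative names):

```go
package main

import "fmt"

// avg64u is floor((a+b)/2) without overflowing uint64, the operation
// the Avg64u op provides.
func avg64u(a, b uint64) uint64 { return a&b + (a^b)>>1 }

// div32uMagic applies the rule's shape: s is the smallest shift with
// 2^s >= c, M = ceil(2^(32+s)/c), and m = M - 2^32 is the part of the
// multiplier that fits in the constant.
func div32uMagic(x, c uint64) uint64 {
	var s uint
	for s = 0; uint64(1)<<s < c; s++ {
	}
	M := (uint64(1)<<(32+s) + c - 1) / c // round up
	m := M - 1<<32
	return avg64u(x<<32, x*m) >> (32 + s - 1)
}

func main() {
	for _, c := range []uint64{7, 10, 1000000007} {
		for _, x := range []uint64{0, 1, 4000000000, 1<<32 - 1} {
			if div32uMagic(x, c) != x/c {
				fmt.Println("mismatch", x, c)
				return
			}
		}
	}
	fmt.Println("ok")
}
```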
func rewriteValuegeneric_OpDiv64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div64 (Const64 [c]) (Const64 [d]))
// cond: d != 0
// result: (Const64 [c/d])
}
// match: (Div64 <t> x (Const64 [-1<<63]))
// cond:
- // result: (Rsh64Ux64 (And64 <t> x (Neg64 <t> x)) (Const64 <types.UInt64> [63]))
+ // result: (Rsh64Ux64 (And64 <t> x (Neg64 <t> x)) (Const64 <typ.UInt64> [63]))
for {
t := v.Type
x := v.Args[0]
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 63
v.AddArg(v2)
return true
}
// match: (Div64 <t> n (Const64 [c]))
// cond: isPowerOfTwo(c)
- // result: (Rsh64x64 (Add64 <t> n (Rsh64Ux64 <t> (Rsh64x64 <t> n (Const64 <types.UInt64> [63])) (Const64 <types.UInt64> [64-log2(c)]))) (Const64 <types.UInt64> [log2(c)]))
+ // result: (Rsh64x64 (Add64 <t> n (Rsh64Ux64 <t> (Rsh64x64 <t> n (Const64 <typ.UInt64> [63])) (Const64 <typ.UInt64> [64-log2(c)]))) (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v1 := b.NewValue0(v.Pos, OpRsh64Ux64, t)
v2 := b.NewValue0(v.Pos, OpRsh64x64, t)
v2.AddArg(n)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 63
v2.AddArg(v3)
v1.AddArg(v2)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 64 - log2(c)
v1.AddArg(v4)
v0.AddArg(v1)
v.AddArg(v0)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = log2(c)
v.AddArg(v5)
return true
}
// match: (Div64 <t> x (Const64 [c]))
// cond: smagicOK(64,c) && smagic(64,c).m&1 == 0
- // result: (Sub64 <t> (Rsh64x64 <t> (Hmul64 <t> (Const64 <types.UInt64> [int64(smagic(64,c).m/2)]) x) (Const64 <types.UInt64> [smagic(64,c).s-1])) (Rsh64x64 <t> x (Const64 <types.UInt64> [63])))
+ // result: (Sub64 <t> (Rsh64x64 <t> (Hmul64 <t> (Const64 <typ.UInt64> [int64(smagic(64,c).m/2)]) x) (Const64 <typ.UInt64> [smagic(64,c).s-1])) (Rsh64x64 <t> x (Const64 <typ.UInt64> [63])))
for {
t := v.Type
x := v.Args[0]
v.Type = t
v0 := b.NewValue0(v.Pos, OpRsh64x64, t)
v1 := b.NewValue0(v.Pos, OpHmul64, t)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = int64(smagic(64, c).m / 2)
v1.AddArg(v2)
v1.AddArg(x)
v0.AddArg(v1)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = smagic(64, c).s - 1
v0.AddArg(v3)
v.AddArg(v0)
v4 := b.NewValue0(v.Pos, OpRsh64x64, t)
v4.AddArg(x)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = 63
v4.AddArg(v5)
v.AddArg(v4)
}
// match: (Div64 <t> x (Const64 [c]))
// cond: smagicOK(64,c) && smagic(64,c).m&1 != 0
- // result: (Sub64 <t> (Rsh64x64 <t> (Add64 <t> (Hmul64 <t> (Const64 <types.UInt64> [int64(smagic(64,c).m)]) x) x) (Const64 <types.UInt64> [smagic(64,c).s])) (Rsh64x64 <t> x (Const64 <types.UInt64> [63])))
+ // result: (Sub64 <t> (Rsh64x64 <t> (Add64 <t> (Hmul64 <t> (Const64 <typ.UInt64> [int64(smagic(64,c).m)]) x) x) (Const64 <typ.UInt64> [smagic(64,c).s])) (Rsh64x64 <t> x (Const64 <typ.UInt64> [63])))
for {
t := v.Type
x := v.Args[0]
v0 := b.NewValue0(v.Pos, OpRsh64x64, t)
v1 := b.NewValue0(v.Pos, OpAdd64, t)
v2 := b.NewValue0(v.Pos, OpHmul64, t)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = int64(smagic(64, c).m)
v2.AddArg(v3)
v2.AddArg(x)
v1.AddArg(v2)
v1.AddArg(x)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = smagic(64, c).s
v0.AddArg(v4)
v.AddArg(v0)
v5 := b.NewValue0(v.Pos, OpRsh64x64, t)
v5.AddArg(x)
- v6 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v6 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v6.AuxInt = 63
v5.AddArg(v6)
v.AddArg(v5)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div64u (Const64 [c]) (Const64 [d]))
// cond: d != 0
// result: (Const64 [int64(uint64(c)/uint64(d))])
}
// match: (Div64u n (Const64 [c]))
// cond: isPowerOfTwo(c)
- // result: (Rsh64Ux64 n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Rsh64Ux64 n (Const64 <typ.UInt64> [log2(c)]))
for {
n := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh64Ux64)
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Div64u x (Const64 [c]))
// cond: umagicOK(64, c) && config.RegSize == 8 && umagic(64,c).m&1 == 0
- // result: (Rsh64Ux64 <types.UInt64> (Hmul64u <types.UInt64> (Const64 <types.UInt64> [int64(1<<63+umagic(64,c).m/2)]) x) (Const64 <types.UInt64> [umagic(64,c).s-1]))
+ // result: (Rsh64Ux64 <typ.UInt64> (Hmul64u <typ.UInt64> (Const64 <typ.UInt64> [int64(1<<63+umagic(64,c).m/2)]) x) (Const64 <typ.UInt64> [umagic(64,c).s-1]))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpRsh64Ux64)
- v.Type = types.UInt64
- v0 := b.NewValue0(v.Pos, OpHmul64u, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpHmul64u, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = int64(1<<63 + umagic(64, c).m/2)
v0.AddArg(v1)
v0.AddArg(x)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = umagic(64, c).s - 1
v.AddArg(v2)
return true
}
// match: (Div64u x (Const64 [c]))
// cond: umagicOK(64, c) && config.RegSize == 8 && c&1 == 0
- // result: (Rsh64Ux64 <types.UInt64> (Hmul64u <types.UInt64> (Const64 <types.UInt64> [int64(1<<63+(umagic(64,c).m+1)/2)]) (Rsh64Ux64 <types.UInt64> x (Const64 <types.UInt64> [1]))) (Const64 <types.UInt64> [umagic(64,c).s-2]))
+ // result: (Rsh64Ux64 <typ.UInt64> (Hmul64u <typ.UInt64> (Const64 <typ.UInt64> [int64(1<<63+(umagic(64,c).m+1)/2)]) (Rsh64Ux64 <typ.UInt64> x (Const64 <typ.UInt64> [1]))) (Const64 <typ.UInt64> [umagic(64,c).s-2]))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpRsh64Ux64)
- v.Type = types.UInt64
- v0 := b.NewValue0(v.Pos, OpHmul64u, types.UInt64)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpHmul64u, typ.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = int64(1<<63 + (umagic(64, c).m+1)/2)
v0.AddArg(v1)
- v2 := b.NewValue0(v.Pos, OpRsh64Ux64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpRsh64Ux64, typ.UInt64)
v2.AddArg(x)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 1
v2.AddArg(v3)
v0.AddArg(v2)
v.AddArg(v0)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = umagic(64, c).s - 2
v.AddArg(v4)
return true
}
// match: (Div64u x (Const64 [c]))
// cond: umagicOK(64, c) && config.RegSize == 8
- // result: (Rsh64Ux64 <types.UInt64> (Avg64u x (Hmul64u <types.UInt64> (Const64 <types.UInt64> [int64(umagic(64,c).m)]) x)) (Const64 <types.UInt64> [umagic(64,c).s-1]))
+ // result: (Rsh64Ux64 <typ.UInt64> (Avg64u x (Hmul64u <typ.UInt64> (Const64 <typ.UInt64> [int64(umagic(64,c).m)]) x)) (Const64 <typ.UInt64> [umagic(64,c).s-1]))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpRsh64Ux64)
- v.Type = types.UInt64
- v0 := b.NewValue0(v.Pos, OpAvg64u, types.UInt64)
+ v.Type = typ.UInt64
+ v0 := b.NewValue0(v.Pos, OpAvg64u, typ.UInt64)
v0.AddArg(x)
- v1 := b.NewValue0(v.Pos, OpHmul64u, types.UInt64)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpHmul64u, typ.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = int64(umagic(64, c).m)
v1.AddArg(v2)
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = umagic(64, c).s - 1
v.AddArg(v3)
return true
func rewriteValuegeneric_OpDiv8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8 (Const8 [c]) (Const8 [d]))
// cond: d != 0
// result: (Const8 [int64(int8(c)/int8(d))])
}
// match: (Div8 <t> x (Const8 [-1<<7 ]))
// cond:
- // result: (Rsh8Ux64 (And8 <t> x (Neg8 <t> x)) (Const64 <types.UInt64> [7 ]))
+ // result: (Rsh8Ux64 (And8 <t> x (Neg8 <t> x)) (Const64 <typ.UInt64> [7 ]))
for {
t := v.Type
x := v.Args[0]
v1.AddArg(x)
v0.AddArg(v1)
v.AddArg(v0)
- v2 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v2 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v2.AuxInt = 7
v.AddArg(v2)
return true
}
// match: (Div8 <t> n (Const8 [c]))
// cond: isPowerOfTwo(c)
- // result: (Rsh8x64 (Add8 <t> n (Rsh8Ux64 <t> (Rsh8x64 <t> n (Const64 <types.UInt64> [ 7])) (Const64 <types.UInt64> [ 8-log2(c)]))) (Const64 <types.UInt64> [log2(c)]))
+ // result: (Rsh8x64 (Add8 <t> n (Rsh8Ux64 <t> (Rsh8x64 <t> n (Const64 <typ.UInt64> [ 7])) (Const64 <typ.UInt64> [ 8-log2(c)]))) (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v1 := b.NewValue0(v.Pos, OpRsh8Ux64, t)
v2 := b.NewValue0(v.Pos, OpRsh8x64, t)
v2.AddArg(n)
- v3 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v3 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v3.AuxInt = 7
v2.AddArg(v3)
v1.AddArg(v2)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 8 - log2(c)
v1.AddArg(v4)
v0.AddArg(v1)
v.AddArg(v0)
- v5 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v5 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v5.AuxInt = log2(c)
v.AddArg(v5)
return true
}
// match: (Div8 <t> x (Const8 [c]))
// cond: smagicOK(8,c)
- // result: (Sub8 <t> (Rsh32x64 <t> (Mul32 <types.UInt32> (Const32 <types.UInt32> [int64(smagic(8,c).m)]) (SignExt8to32 x)) (Const64 <types.UInt64> [8+smagic(8,c).s])) (Rsh32x64 <t> (SignExt8to32 x) (Const64 <types.UInt64> [31])))
+ // result: (Sub8 <t> (Rsh32x64 <t> (Mul32 <typ.UInt32> (Const32 <typ.UInt32> [int64(smagic(8,c).m)]) (SignExt8to32 x)) (Const64 <typ.UInt64> [8+smagic(8,c).s])) (Rsh32x64 <t> (SignExt8to32 x) (Const64 <typ.UInt64> [31])))
for {
t := v.Type
x := v.Args[0]
v.reset(OpSub8)
v.Type = t
v0 := b.NewValue0(v.Pos, OpRsh32x64, t)
- v1 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(smagic(8, c).m)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v3 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 8 + smagic(8, c).s
v0.AddArg(v4)
v.AddArg(v0)
v5 := b.NewValue0(v.Pos, OpRsh32x64, t)
- v6 := b.NewValue0(v.Pos, OpSignExt8to32, types.Int32)
+ v6 := b.NewValue0(v.Pos, OpSignExt8to32, typ.Int32)
v6.AddArg(x)
v5.AddArg(v6)
- v7 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v7 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v7.AuxInt = 31
v5.AddArg(v7)
v.AddArg(v5)
func rewriteValuegeneric_OpDiv8u_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Div8u (Const8 [c]) (Const8 [d]))
// cond: d != 0
// result: (Const8 [int64(int8(uint8(c)/uint8(d)))])
}
// match: (Div8u n (Const8 [c]))
// cond: isPowerOfTwo(c&0xff)
- // result: (Rsh8Ux64 n (Const64 <types.UInt64> [log2(c&0xff)]))
+ // result: (Rsh8Ux64 n (Const64 <typ.UInt64> [log2(c&0xff)]))
for {
n := v.Args[0]
v_1 := v.Args[1]
}
v.reset(OpRsh8Ux64)
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c & 0xff)
v.AddArg(v0)
return true
}
// match: (Div8u x (Const8 [c]))
// cond: umagicOK(8, c)
- // result: (Trunc32to8 (Rsh32Ux64 <types.UInt32> (Mul32 <types.UInt32> (Const32 <types.UInt32> [int64(1<<8+umagic(8,c).m)]) (ZeroExt8to32 x)) (Const64 <types.UInt64> [8+umagic(8,c).s])))
+ // result: (Trunc32to8 (Rsh32Ux64 <typ.UInt32> (Mul32 <typ.UInt32> (Const32 <typ.UInt32> [int64(1<<8+umagic(8,c).m)]) (ZeroExt8to32 x)) (Const64 <typ.UInt64> [8+umagic(8,c).s])))
for {
x := v.Args[0]
v_1 := v.Args[1]
break
}
v.reset(OpTrunc32to8)
- v0 := b.NewValue0(v.Pos, OpRsh32Ux64, types.UInt32)
- v1 := b.NewValue0(v.Pos, OpMul32, types.UInt32)
- v2 := b.NewValue0(v.Pos, OpConst32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpRsh32Ux64, typ.UInt32)
+ v1 := b.NewValue0(v.Pos, OpMul32, typ.UInt32)
+ v2 := b.NewValue0(v.Pos, OpConst32, typ.UInt32)
v2.AuxInt = int64(1<<8 + umagic(8, c).m)
v1.AddArg(v2)
- v3 := b.NewValue0(v.Pos, OpZeroExt8to32, types.UInt32)
+ v3 := b.NewValue0(v.Pos, OpZeroExt8to32, typ.UInt32)
v3.AddArg(x)
v1.AddArg(v3)
v0.AddArg(v1)
- v4 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v4 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v4.AuxInt = 8 + umagic(8, c).s
v0.AddArg(v4)
v.AddArg(v0)
func rewriteValuegeneric_OpEqInter_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqInter x y)
// cond:
// result: (EqPtr (ITab x) (ITab y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpEqPtr)
- v0 := b.NewValue0(v.Pos, OpITab, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpITab, typ.BytePtr)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpITab, types.BytePtr)
+ v1 := b.NewValue0(v.Pos, OpITab, typ.BytePtr)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuegeneric_OpEqPtr_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqPtr p (ConstNil))
// cond:
// result: (Not (IsNonNil p))
break
}
v.reset(OpNot)
- v0 := b.NewValue0(v.Pos, OpIsNonNil, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpIsNonNil, typ.Bool)
v0.AddArg(p)
v.AddArg(v0)
return true
}
p := v.Args[1]
v.reset(OpNot)
- v0 := b.NewValue0(v.Pos, OpIsNonNil, types.Bool)
+ v0 := b.NewValue0(v.Pos, OpIsNonNil, typ.Bool)
v0.AddArg(p)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpEqSlice_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (EqSlice x y)
// cond:
// result: (EqPtr (SlicePtr x) (SlicePtr y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpEqPtr)
- v0 := b.NewValue0(v.Pos, OpSlicePtr, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpSlicePtr, typ.BytePtr)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSlicePtr, types.BytePtr)
+ v1 := b.NewValue0(v.Pos, OpSlicePtr, typ.BytePtr)
v1.AddArg(y)
v.AddArg(v1)
return true
fe := b.Func.fe
_ = fe
// match: (Load <t1> p1 (Store {t2} p2 x _))
- // cond: isSamePtr(p1,p2) && t1.Compare(x.Type)==CMPeq && t1.Size() == t2.(Type).Size()
+ // cond: isSamePtr(p1,p2) && t1.Compare(x.Type) == types.CMPeq && t1.Size() == t2.(*types.Type).Size()
// result: x
for {
t1 := v.Type
t2 := v_1.Aux
p2 := v_1.Args[0]
x := v_1.Args[1]
- if !(isSamePtr(p1, p2) && t1.Compare(x.Type) == CMPeq && t1.Size() == t2.(Type).Size()) {
+ if !(isSamePtr(p1, p2) && t1.Compare(x.Type) == types.CMPeq && t1.Size() == t2.(*types.Type).Size()) {
break
}
v.reset(OpCopy)
func rewriteValuegeneric_OpLsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh16x64 (Const16 [c]) (Const64 [d]))
// cond:
// result: (Const16 [int64(int16(c) << uint64(d))])
}
// match: (Lsh16x64 (Rsh16Ux64 (Lsh16x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Lsh16x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Lsh16x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpRsh16Ux64 {
}
v.reset(OpLsh16x64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
func rewriteValuegeneric_OpLsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh32x64 (Const32 [c]) (Const64 [d]))
// cond:
// result: (Const32 [int64(int32(c) << uint64(d))])
}
// match: (Lsh32x64 (Rsh32Ux64 (Lsh32x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Lsh32x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Lsh32x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpRsh32Ux64 {
}
v.reset(OpLsh32x64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
func rewriteValuegeneric_OpLsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh64x64 (Const64 [c]) (Const64 [d]))
// cond:
// result: (Const64 [c << uint64(d)])
}
// match: (Lsh64x64 (Rsh64Ux64 (Lsh64x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Lsh64x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Lsh64x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpRsh64Ux64 {
}
v.reset(OpLsh64x64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
func rewriteValuegeneric_OpLsh8x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Lsh8x64 (Const8 [c]) (Const64 [d]))
// cond:
// result: (Const8 [int64(int8(c) << uint64(d))])
}
// match: (Lsh8x64 (Rsh8Ux64 (Lsh8x64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Lsh8x64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Lsh8x64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpRsh8Ux64 {
}
v.reset(OpLsh8x64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
func rewriteValuegeneric_OpMul16_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul16 (Const16 [c]) (Const16 [d]))
// cond:
// result: (Const16 [int64(int16(c*d))])
}
// match: (Mul16 <t> n (Const16 [c]))
// cond: isPowerOfTwo(c)
- // result: (Lsh16x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh16x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v.reset(OpLsh16x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul16 <t> (Const16 [c]) n)
// cond: isPowerOfTwo(c)
- // result: (Lsh16x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh16x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpLsh16x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul16 <t> n (Const16 [c]))
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg16 (Lsh16x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg16 (Lsh16x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
n := v.Args[0]
v.reset(OpNeg16)
v0 := b.NewValue0(v.Pos, OpLsh16x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
}
// match: (Mul16 <t> (Const16 [c]) n)
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg16 (Lsh16x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg16 (Lsh16x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpNeg16)
v0 := b.NewValue0(v.Pos, OpLsh16x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuegeneric_OpMul32_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul32 (Const32 [c]) (Const32 [d]))
// cond:
// result: (Const32 [int64(int32(c*d))])
}
// match: (Mul32 <t> n (Const32 [c]))
// cond: isPowerOfTwo(c)
- // result: (Lsh32x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh32x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v.reset(OpLsh32x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul32 <t> (Const32 [c]) n)
// cond: isPowerOfTwo(c)
- // result: (Lsh32x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh32x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpLsh32x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul32 <t> n (Const32 [c]))
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg32 (Lsh32x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg32 (Lsh32x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
n := v.Args[0]
v.reset(OpNeg32)
v0 := b.NewValue0(v.Pos, OpLsh32x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
}
// match: (Mul32 <t> (Const32 [c]) n)
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg32 (Lsh32x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg32 (Lsh32x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpNeg32)
v0 := b.NewValue0(v.Pos, OpLsh32x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuegeneric_OpMul64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul64 (Const64 [c]) (Const64 [d]))
// cond:
// result: (Const64 [c*d])
}
// match: (Mul64 <t> n (Const64 [c]))
// cond: isPowerOfTwo(c)
- // result: (Lsh64x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh64x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v.reset(OpLsh64x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul64 <t> (Const64 [c]) n)
// cond: isPowerOfTwo(c)
- // result: (Lsh64x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh64x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpLsh64x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul64 <t> n (Const64 [c]))
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg64 (Lsh64x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg64 (Lsh64x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
n := v.Args[0]
v.reset(OpNeg64)
v0 := b.NewValue0(v.Pos, OpLsh64x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
}
// match: (Mul64 <t> (Const64 [c]) n)
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg64 (Lsh64x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg64 (Lsh64x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpNeg64)
v0 := b.NewValue0(v.Pos, OpLsh64x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuegeneric_OpMul8_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Mul8 (Const8 [c]) (Const8 [d]))
// cond:
// result: (Const8 [int64(int8(c*d))])
}
// match: (Mul8 <t> n (Const8 [c]))
// cond: isPowerOfTwo(c)
- // result: (Lsh8x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh8x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
n := v.Args[0]
v.reset(OpLsh8x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul8 <t> (Const8 [c]) n)
// cond: isPowerOfTwo(c)
- // result: (Lsh8x64 <t> n (Const64 <types.UInt64> [log2(c)]))
+ // result: (Lsh8x64 <t> n (Const64 <typ.UInt64> [log2(c)]))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpLsh8x64)
v.Type = t
v.AddArg(n)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = log2(c)
v.AddArg(v0)
return true
}
// match: (Mul8 <t> n (Const8 [c]))
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg8 (Lsh8x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg8 (Lsh8x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
n := v.Args[0]
v.reset(OpNeg8)
v0 := b.NewValue0(v.Pos, OpLsh8x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
}
// match: (Mul8 <t> (Const8 [c]) n)
// cond: t.IsSigned() && isPowerOfTwo(-c)
- // result: (Neg8 (Lsh8x64 <t> n (Const64 <types.UInt64> [log2(-c)])))
+ // result: (Neg8 (Lsh8x64 <t> n (Const64 <typ.UInt64> [log2(-c)])))
for {
t := v.Type
v_0 := v.Args[0]
v.reset(OpNeg8)
v0 := b.NewValue0(v.Pos, OpLsh8x64, t)
v0.AddArg(n)
- v1 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v1.AuxInt = log2(-c)
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuegeneric_OpNeqInter_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NeqInter x y)
// cond:
// result: (NeqPtr (ITab x) (ITab y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpNeqPtr)
- v0 := b.NewValue0(v.Pos, OpITab, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpITab, typ.BytePtr)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpITab, types.BytePtr)
+ v1 := b.NewValue0(v.Pos, OpITab, typ.BytePtr)
v1.AddArg(y)
v.AddArg(v1)
return true
func rewriteValuegeneric_OpNeqSlice_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (NeqSlice x y)
// cond:
// result: (NeqPtr (SlicePtr x) (SlicePtr y))
x := v.Args[0]
y := v.Args[1]
v.reset(OpNeqPtr)
- v0 := b.NewValue0(v.Pos, OpSlicePtr, types.BytePtr)
+ v0 := b.NewValue0(v.Pos, OpSlicePtr, typ.BytePtr)
v0.AddArg(x)
v.AddArg(v0)
- v1 := b.NewValue0(v.Pos, OpSlicePtr, types.BytePtr)
+ v1 := b.NewValue0(v.Pos, OpSlicePtr, typ.BytePtr)
v1.AddArg(y)
v.AddArg(v1)
return true
return true
}
// match: (OffPtr p [0])
- // cond: v.Type.Compare(p.Type) == CMPeq
+ // cond: v.Type.Compare(p.Type) == types.CMPeq
// result: p
for {
if v.AuxInt != 0 {
break
}
p := v.Args[0]
- if !(v.Type.Compare(p.Type) == CMPeq) {
+ if !(v.Type.Compare(p.Type) == types.CMPeq) {
break
}
v.reset(OpCopy)
_ = b
config := b.Func.Config
_ = config
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (PtrIndex <t> ptr idx)
// cond: config.PtrSize == 4
- // result: (AddPtr ptr (Mul32 <types.Int> idx (Const32 <types.Int> [t.ElemType().Size()])))
+ // result: (AddPtr ptr (Mul32 <typ.Int> idx (Const32 <typ.Int> [t.ElemType().Size()])))
for {
t := v.Type
ptr := v.Args[0]
}
v.reset(OpAddPtr)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMul32, types.Int)
+ v0 := b.NewValue0(v.Pos, OpMul32, typ.Int)
v0.AddArg(idx)
- v1 := b.NewValue0(v.Pos, OpConst32, types.Int)
+ v1 := b.NewValue0(v.Pos, OpConst32, typ.Int)
v1.AuxInt = t.ElemType().Size()
v0.AddArg(v1)
v.AddArg(v0)
}
// match: (PtrIndex <t> ptr idx)
// cond: config.PtrSize == 8
- // result: (AddPtr ptr (Mul64 <types.Int> idx (Const64 <types.Int> [t.ElemType().Size()])))
+ // result: (AddPtr ptr (Mul64 <typ.Int> idx (Const64 <typ.Int> [t.ElemType().Size()])))
for {
t := v.Type
ptr := v.Args[0]
}
v.reset(OpAddPtr)
v.AddArg(ptr)
- v0 := b.NewValue0(v.Pos, OpMul64, types.Int)
+ v0 := b.NewValue0(v.Pos, OpMul64, typ.Int)
v0.AddArg(idx)
- v1 := b.NewValue0(v.Pos, OpConst64, types.Int)
+ v1 := b.NewValue0(v.Pos, OpConst64, typ.Int)
v1.AuxInt = t.ElemType().Size()
v0.AddArg(v1)
v.AddArg(v0)
func rewriteValuegeneric_OpRsh16Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16Ux64 (Const16 [c]) (Const64 [d]))
// cond:
// result: (Const16 [int64(int16(uint16(c) >> uint64(d)))])
}
// match: (Rsh16Ux64 (Lsh16x64 (Rsh16Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Rsh16Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Rsh16Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh16x64 {
}
v.reset(OpRsh16Ux64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
}
// match: (Rsh16Ux64 (Lsh16x64 x (Const64 [8])) (Const64 [8]))
// cond:
- // result: (ZeroExt8to16 (Trunc16to8 <types.UInt8> x))
+ // result: (ZeroExt8to16 (Trunc16to8 <typ.UInt8> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh16x64 {
break
}
v.reset(OpZeroExt8to16)
- v0 := b.NewValue0(v.Pos, OpTrunc16to8, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpTrunc16to8, typ.UInt8)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpRsh16x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh16x64 (Const16 [c]) (Const64 [d]))
// cond:
// result: (Const16 [int64(int16(c) >> uint64(d))])
}
// match: (Rsh16x64 (Lsh16x64 x (Const64 [8])) (Const64 [8]))
// cond:
- // result: (SignExt8to16 (Trunc16to8 <types.Int8> x))
+ // result: (SignExt8to16 (Trunc16to8 <typ.Int8> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh16x64 {
break
}
v.reset(OpSignExt8to16)
- v0 := b.NewValue0(v.Pos, OpTrunc16to8, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpTrunc16to8, typ.Int8)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpRsh32Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32Ux64 (Const32 [c]) (Const64 [d]))
// cond:
// result: (Const32 [int64(int32(uint32(c) >> uint64(d)))])
}
// match: (Rsh32Ux64 (Lsh32x64 (Rsh32Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Rsh32Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Rsh32Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh32x64 {
}
v.reset(OpRsh32Ux64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
}
// match: (Rsh32Ux64 (Lsh32x64 x (Const64 [24])) (Const64 [24]))
// cond:
- // result: (ZeroExt8to32 (Trunc32to8 <types.UInt8> x))
+ // result: (ZeroExt8to32 (Trunc32to8 <typ.UInt8> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh32x64 {
break
}
v.reset(OpZeroExt8to32)
- v0 := b.NewValue0(v.Pos, OpTrunc32to8, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpTrunc32to8, typ.UInt8)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh32Ux64 (Lsh32x64 x (Const64 [16])) (Const64 [16]))
// cond:
- // result: (ZeroExt16to32 (Trunc32to16 <types.UInt16> x))
+ // result: (ZeroExt16to32 (Trunc32to16 <typ.UInt16> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh32x64 {
break
}
v.reset(OpZeroExt16to32)
- v0 := b.NewValue0(v.Pos, OpTrunc32to16, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpTrunc32to16, typ.UInt16)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpRsh32x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh32x64 (Const32 [c]) (Const64 [d]))
// cond:
// result: (Const32 [int64(int32(c) >> uint64(d))])
}
// match: (Rsh32x64 (Lsh32x64 x (Const64 [24])) (Const64 [24]))
// cond:
- // result: (SignExt8to32 (Trunc32to8 <types.Int8> x))
+ // result: (SignExt8to32 (Trunc32to8 <typ.Int8> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh32x64 {
break
}
v.reset(OpSignExt8to32)
- v0 := b.NewValue0(v.Pos, OpTrunc32to8, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpTrunc32to8, typ.Int8)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh32x64 (Lsh32x64 x (Const64 [16])) (Const64 [16]))
// cond:
- // result: (SignExt16to32 (Trunc32to16 <types.Int16> x))
+ // result: (SignExt16to32 (Trunc32to16 <typ.Int16> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh32x64 {
break
}
v.reset(OpSignExt16to32)
- v0 := b.NewValue0(v.Pos, OpTrunc32to16, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpTrunc32to16, typ.Int16)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpRsh64Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64Ux64 (Const64 [c]) (Const64 [d]))
// cond:
// result: (Const64 [int64(uint64(c) >> uint64(d))])
}
// match: (Rsh64Ux64 (Lsh64x64 (Rsh64Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Rsh64Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Rsh64Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
}
v.reset(OpRsh64Ux64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
}
// match: (Rsh64Ux64 (Lsh64x64 x (Const64 [56])) (Const64 [56]))
// cond:
- // result: (ZeroExt8to64 (Trunc64to8 <types.UInt8> x))
+ // result: (ZeroExt8to64 (Trunc64to8 <typ.UInt8> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
break
}
v.reset(OpZeroExt8to64)
- v0 := b.NewValue0(v.Pos, OpTrunc64to8, types.UInt8)
+ v0 := b.NewValue0(v.Pos, OpTrunc64to8, typ.UInt8)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh64Ux64 (Lsh64x64 x (Const64 [48])) (Const64 [48]))
// cond:
- // result: (ZeroExt16to64 (Trunc64to16 <types.UInt16> x))
+ // result: (ZeroExt16to64 (Trunc64to16 <typ.UInt16> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
break
}
v.reset(OpZeroExt16to64)
- v0 := b.NewValue0(v.Pos, OpTrunc64to16, types.UInt16)
+ v0 := b.NewValue0(v.Pos, OpTrunc64to16, typ.UInt16)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh64Ux64 (Lsh64x64 x (Const64 [32])) (Const64 [32]))
// cond:
- // result: (ZeroExt32to64 (Trunc64to32 <types.UInt32> x))
+ // result: (ZeroExt32to64 (Trunc64to32 <typ.UInt32> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
break
}
v.reset(OpZeroExt32to64)
- v0 := b.NewValue0(v.Pos, OpTrunc64to32, types.UInt32)
+ v0 := b.NewValue0(v.Pos, OpTrunc64to32, typ.UInt32)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpRsh64x64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh64x64 (Const64 [c]) (Const64 [d]))
// cond:
// result: (Const64 [c >> uint64(d)])
}
// match: (Rsh64x64 (Lsh64x64 x (Const64 [56])) (Const64 [56]))
// cond:
- // result: (SignExt8to64 (Trunc64to8 <types.Int8> x))
+ // result: (SignExt8to64 (Trunc64to8 <typ.Int8> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
break
}
v.reset(OpSignExt8to64)
- v0 := b.NewValue0(v.Pos, OpTrunc64to8, types.Int8)
+ v0 := b.NewValue0(v.Pos, OpTrunc64to8, typ.Int8)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh64x64 (Lsh64x64 x (Const64 [48])) (Const64 [48]))
// cond:
- // result: (SignExt16to64 (Trunc64to16 <types.Int16> x))
+ // result: (SignExt16to64 (Trunc64to16 <typ.Int16> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
break
}
v.reset(OpSignExt16to64)
- v0 := b.NewValue0(v.Pos, OpTrunc64to16, types.Int16)
+ v0 := b.NewValue0(v.Pos, OpTrunc64to16, typ.Int16)
v0.AddArg(x)
v.AddArg(v0)
return true
}
// match: (Rsh64x64 (Lsh64x64 x (Const64 [32])) (Const64 [32]))
// cond:
- // result: (SignExt32to64 (Trunc64to32 <types.Int32> x))
+ // result: (SignExt32to64 (Trunc64to32 <typ.Int32> x))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh64x64 {
break
}
v.reset(OpSignExt32to64)
- v0 := b.NewValue0(v.Pos, OpTrunc64to32, types.Int32)
+ v0 := b.NewValue0(v.Pos, OpTrunc64to32, typ.Int32)
v0.AddArg(x)
v.AddArg(v0)
return true
func rewriteValuegeneric_OpRsh8Ux64_0(v *Value) bool {
b := v.Block
_ = b
- types := &b.Func.Config.Types
- _ = types
+ typ := &b.Func.Config.Types
+ _ = typ
// match: (Rsh8Ux64 (Const8 [c]) (Const64 [d]))
// cond:
// result: (Const8 [int64(int8(uint8(c) >> uint64(d)))])
}
// match: (Rsh8Ux64 (Lsh8x64 (Rsh8Ux64 x (Const64 [c1])) (Const64 [c2])) (Const64 [c3]))
// cond: uint64(c1) >= uint64(c2) && uint64(c3) >= uint64(c2) && !uaddOvf(c1-c2, c3)
- // result: (Rsh8Ux64 x (Const64 <types.UInt64> [c1-c2+c3]))
+ // result: (Rsh8Ux64 x (Const64 <typ.UInt64> [c1-c2+c3]))
for {
v_0 := v.Args[0]
if v_0.Op != OpLsh8x64 {
break
}
v.reset(OpRsh8Ux64)
v.AddArg(x)
- v0 := b.NewValue0(v.Pos, OpConst64, types.UInt64)
+ v0 := b.NewValue0(v.Pos, OpConst64, typ.UInt64)
v0.AuxInt = c1 - c2 + c3
v.AddArg(v0)
return true
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(f1)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v1.Aux = t.FieldType(0)
v2 := b.NewValue0(v.Pos, OpOffPtr, t.FieldType(0).PtrTo())
v2.AuxInt = 0
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(f2)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v1.Aux = t.FieldType(1)
v2 := b.NewValue0(v.Pos, OpOffPtr, t.FieldType(1).PtrTo())
v2.AuxInt = t.FieldOff(1)
v2.AddArg(dst)
v1.AddArg(v2)
v1.AddArg(f1)
- v3 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v3.Aux = t.FieldType(0)
v4 := b.NewValue0(v.Pos, OpOffPtr, t.FieldType(0).PtrTo())
v4.AuxInt = 0
v0.AddArg(dst)
v.AddArg(v0)
v.AddArg(f3)
- v1 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v1 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v1.Aux = t.FieldType(2)
v2 := b.NewValue0(v.Pos, OpOffPtr, t.FieldType(2).PtrTo())
v2.AuxInt = t.FieldOff(2)
v2.AddArg(dst)
v1.AddArg(v2)
v1.AddArg(f2)
- v3 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v3 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v3.Aux = t.FieldType(1)
v4 := b.NewValue0(v.Pos, OpOffPtr, t.FieldType(1).PtrTo())
v4.AuxInt = t.FieldOff(1)
v4.AddArg(dst)
v3.AddArg(v4)
v3.AddArg(f1)
- v5 := b.NewValue0(v.Pos, OpStore, TypeMem)
+ v5 := b.NewValue0(v.Pos, OpStore, types.TypeMem)
v5.Aux = t.FieldType(0)
v6 := b.NewValue0(v.Pos, OpOffPtr, t.FieldType(0).PtrTo())
v6.AuxInt = 0
return true
}
// match: (Store {t} dst (Load src mem) mem)
- // cond: !fe.CanSSA(t.(Type))
- // result: (Move {t} [t.(Type).Size()] dst src mem)
+ // cond: !fe.CanSSA(t.(*types.Type))
+ // result: (Move {t} [t.(*types.Type).Size()] dst src mem)
for {
t := v.Aux
dst := v.Args[0]
if mem != v.Args[2] {
break
}
- if !(!fe.CanSSA(t.(Type))) {
+ if !(!fe.CanSSA(t.(*types.Type))) {
break
}
v.reset(OpMove)
- v.AuxInt = t.(Type).Size()
+ v.AuxInt = t.(*types.Type).Size()
v.Aux = t
v.AddArg(dst)
v.AddArg(src)
return true
}
// match: (Store {t} dst (Load src mem) (VarDef {x} mem))
- // cond: !fe.CanSSA(t.(Type))
- // result: (Move {t} [t.(Type).Size()] dst src (VarDef {x} mem))
+ // cond: !fe.CanSSA(t.(*types.Type))
+ // result: (Move {t} [t.(*types.Type).Size()] dst src (VarDef {x} mem))
for {
t := v.Aux
dst := v.Args[0]
if mem != v_2.Args[0] {
break
}
- if !(!fe.CanSSA(t.(Type))) {
+ if !(!fe.CanSSA(t.(*types.Type))) {
break
}
v.reset(OpMove)
- v.AuxInt = t.(Type).Size()
+ v.AuxInt = t.(*types.Type).Size()
v.Aux = t
v.AddArg(dst)
v.AddArg(src)
- v0 := b.NewValue0(v.Pos, OpVarDef, TypeMem)
+ v0 := b.NewValue0(v.Pos, OpVarDef, types.TypeMem)
v0.Aux = x
v0.AddArg(mem)
v.AddArg(v0)
_ = config
fe := b.Func.fe
_ = fe
- types := &config.Types
- _ = types
+ typ := &config.Types
+ _ = typ
switch b.Kind {
case BlockIf:
// match: (If (Not cond) yes no)
package ssa
-import "testing"
+import (
+ "cmd/compile/internal/types"
+ "testing"
+)
func TestSchedule(t *testing.T) {
c := testConfig(t)
cases := []fun{
c.Fun("entry",
Bloc("entry",
- Valu("mem0", OpInitMem, TypeMem, 0, nil),
- Valu("ptr", OpConst64, TypeInt64, 0xABCD, nil),
- Valu("v", OpConst64, TypeInt64, 12, nil),
- Valu("mem1", OpStore, TypeMem, 0, TypeInt64, "ptr", "v", "mem0"),
- Valu("mem2", OpStore, TypeMem, 0, TypeInt64, "ptr", "v", "mem1"),
- Valu("mem3", OpStore, TypeMem, 0, TypeInt64, "ptr", "sum", "mem2"),
- Valu("l1", OpLoad, TypeInt64, 0, nil, "ptr", "mem1"),
- Valu("l2", OpLoad, TypeInt64, 0, nil, "ptr", "mem2"),
- Valu("sum", OpAdd64, TypeInt64, 0, nil, "l1", "l2"),
+ Valu("mem0", OpInitMem, types.TypeMem, 0, nil),
+ Valu("ptr", OpConst64, c.config.Types.Int64, 0xABCD, nil),
+ Valu("v", OpConst64, c.config.Types.Int64, 12, nil),
+ Valu("mem1", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ptr", "v", "mem0"),
+ Valu("mem2", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ptr", "v", "mem1"),
+ Valu("mem3", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ptr", "sum", "mem2"),
+ Valu("l1", OpLoad, c.config.Types.Int64, 0, nil, "ptr", "mem1"),
+ Valu("l2", OpLoad, c.config.Types.Int64, 0, nil, "ptr", "mem2"),
+ Valu("sum", OpAdd64, c.config.Types.Int64, 0, nil, "l1", "l2"),
Goto("exit")),
Bloc("exit",
Exit("mem3"))),
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem0", OpInitMem, TypeMem, 0, nil),
- Valu("a", OpAdd64, TypeInt64, 0, nil, "b", "c"), // v2
- Valu("b", OpLoad, TypeInt64, 0, nil, "ptr", "mem1"), // v3
- Valu("c", OpNeg64, TypeInt64, 0, nil, "b"), // v4
- Valu("mem1", OpStore, TypeMem, 0, TypeInt64, "ptr", "v", "mem0"), // v5
- Valu("mem2", OpStore, TypeMem, 0, TypeInt64, "ptr", "a", "mem1"),
- Valu("ptr", OpConst64, TypeInt64, 0xABCD, nil),
- Valu("v", OpConst64, TypeInt64, 12, nil),
+ Valu("mem0", OpInitMem, types.TypeMem, 0, nil),
+ Valu("a", OpAdd64, c.config.Types.Int64, 0, nil, "b", "c"), // v2
+ Valu("b", OpLoad, c.config.Types.Int64, 0, nil, "ptr", "mem1"), // v3
+ Valu("c", OpNeg64, c.config.Types.Int64, 0, nil, "b"), // v4
+ Valu("mem1", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ptr", "v", "mem0"), // v5
+ Valu("mem2", OpStore, types.TypeMem, 0, c.config.Types.Int64, "ptr", "a", "mem1"),
+ Valu("ptr", OpConst64, c.config.Types.Int64, 0xABCD, nil),
+ Valu("v", OpConst64, c.config.Types.Int64, 12, nil),
Goto("exit")),
Bloc("exit",
Exit("mem2")))
package ssa
import (
+ "cmd/compile/internal/types"
"testing"
)
func TestShiftConstAMD64(t *testing.T) {
c := testConfig(t)
- fun := makeConstShiftFunc(c, 18, OpLsh64x64, TypeUInt64)
+ fun := makeConstShiftFunc(c, 18, OpLsh64x64, c.config.Types.UInt64)
checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHLQconst: 1, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
- fun = makeConstShiftFunc(c, 66, OpLsh64x64, TypeUInt64)
+ fun = makeConstShiftFunc(c, 66, OpLsh64x64, c.config.Types.UInt64)
checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHLQconst: 0, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
- fun = makeConstShiftFunc(c, 18, OpRsh64Ux64, TypeUInt64)
+ fun = makeConstShiftFunc(c, 18, OpRsh64Ux64, c.config.Types.UInt64)
checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHRQconst: 1, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
- fun = makeConstShiftFunc(c, 66, OpRsh64Ux64, TypeUInt64)
+ fun = makeConstShiftFunc(c, 66, OpRsh64Ux64, c.config.Types.UInt64)
checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SHRQconst: 0, OpAMD64CMPQconst: 0, OpAMD64ANDQconst: 0})
- fun = makeConstShiftFunc(c, 18, OpRsh64x64, TypeInt64)
+ fun = makeConstShiftFunc(c, 18, OpRsh64x64, c.config.Types.Int64)
checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SARQconst: 1, OpAMD64CMPQconst: 0})
- fun = makeConstShiftFunc(c, 66, OpRsh64x64, TypeInt64)
+ fun = makeConstShiftFunc(c, 66, OpRsh64x64, c.config.Types.Int64)
checkOpcodeCounts(t, fun.f, map[Op]int{OpAMD64SARQconst: 1, OpAMD64CMPQconst: 0})
}
-func makeConstShiftFunc(c *Conf, amount int64, op Op, typ Type) fun {
- ptyp := &TypeImpl{Size_: 8, Ptr: true, Name: "ptr"}
+func makeConstShiftFunc(c *Conf, amount int64, op Op, typ *types.Type) fun {
+ ptyp := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("SP", OpSP, TypeUInt64, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("SP", OpSP, c.config.Types.UInt64, 0, nil),
Valu("argptr", OpOffPtr, ptyp, 8, nil, "SP"),
Valu("resptr", OpOffPtr, ptyp, 16, nil, "SP"),
Valu("load", OpLoad, typ, 0, nil, "argptr", "mem"),
- Valu("c", OpConst64, TypeUInt64, amount, nil),
+ Valu("c", OpConst64, c.config.Types.UInt64, amount, nil),
Valu("shift", op, typ, 0, nil, "load", "c"),
- Valu("store", OpStore, TypeMem, 0, TypeUInt64, "resptr", "shift", "mem"),
+ Valu("store", OpStore, types.TypeMem, 0, c.config.Types.UInt64, "resptr", "shift", "mem"),
Exit("store")))
Compile(fun.f)
return fun
}
func TestShiftToExtensionAMD64(t *testing.T) {
+ c := testConfig(t)
// Test that eligible pairs of constant shifts are converted to extensions.
// For example:
// (uint64(x) << 32) >> 32 -> uint64(uint32(x))
tests := [...]struct {
amount int64
left, right Op
- typ Type
+ typ *types.Type
}{
// unsigned
- {56, OpLsh64x64, OpRsh64Ux64, TypeUInt64},
- {48, OpLsh64x64, OpRsh64Ux64, TypeUInt64},
- {32, OpLsh64x64, OpRsh64Ux64, TypeUInt64},
- {24, OpLsh32x64, OpRsh32Ux64, TypeUInt32},
- {16, OpLsh32x64, OpRsh32Ux64, TypeUInt32},
- {8, OpLsh16x64, OpRsh16Ux64, TypeUInt16},
+ {56, OpLsh64x64, OpRsh64Ux64, c.config.Types.UInt64},
+ {48, OpLsh64x64, OpRsh64Ux64, c.config.Types.UInt64},
+ {32, OpLsh64x64, OpRsh64Ux64, c.config.Types.UInt64},
+ {24, OpLsh32x64, OpRsh32Ux64, c.config.Types.UInt32},
+ {16, OpLsh32x64, OpRsh32Ux64, c.config.Types.UInt32},
+ {8, OpLsh16x64, OpRsh16Ux64, c.config.Types.UInt16},
// signed
- {56, OpLsh64x64, OpRsh64x64, TypeInt64},
- {48, OpLsh64x64, OpRsh64x64, TypeInt64},
- {32, OpLsh64x64, OpRsh64x64, TypeInt64},
- {24, OpLsh32x64, OpRsh32x64, TypeInt32},
- {16, OpLsh32x64, OpRsh32x64, TypeInt32},
- {8, OpLsh16x64, OpRsh16x64, TypeInt16},
+ {56, OpLsh64x64, OpRsh64x64, c.config.Types.Int64},
+ {48, OpLsh64x64, OpRsh64x64, c.config.Types.Int64},
+ {32, OpLsh64x64, OpRsh64x64, c.config.Types.Int64},
+ {24, OpLsh32x64, OpRsh32x64, c.config.Types.Int32},
+ {16, OpLsh32x64, OpRsh32x64, c.config.Types.Int32},
+ {8, OpLsh16x64, OpRsh16x64, c.config.Types.Int16},
}
- c := testConfig(t)
for _, tc := range tests {
fun := makeShiftExtensionFunc(c, tc.amount, tc.left, tc.right, tc.typ)
checkOpcodeCounts(t, fun.f, ops)
// (rshift (lshift (Const64 [amount])) (Const64 [amount]))
//
// This may be equivalent to a sign or zero extension.
-func makeShiftExtensionFunc(c *Conf, amount int64, lshift, rshift Op, typ Type) fun {
- ptyp := &TypeImpl{Size_: 8, Ptr: true, Name: "ptr"}
+func makeShiftExtensionFunc(c *Conf, amount int64, lshift, rshift Op, typ *types.Type) fun {
+ ptyp := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("SP", OpSP, TypeUInt64, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("SP", OpSP, c.config.Types.UInt64, 0, nil),
Valu("argptr", OpOffPtr, ptyp, 8, nil, "SP"),
Valu("resptr", OpOffPtr, ptyp, 16, nil, "SP"),
Valu("load", OpLoad, typ, 0, nil, "argptr", "mem"),
- Valu("c", OpConst64, TypeUInt64, amount, nil),
+ Valu("c", OpConst64, c.config.Types.UInt64, amount, nil),
Valu("lshift", lshift, typ, 0, nil, "load", "c"),
Valu("rshift", rshift, typ, 0, nil, "lshift", "c"),
- Valu("store", OpStore, TypeMem, 0, TypeUInt64, "resptr", "rshift", "mem"),
+ Valu("store", OpStore, types.TypeMem, 0, c.config.Types.UInt64, "resptr", "rshift", "mem"),
Exit("store")))
Compile(fun.f)
return fun
package ssa
-import "testing"
+import (
+ "cmd/compile/internal/types"
+ "testing"
+)
func TestShortCircuit(t *testing.T) {
c := testConfig(t)
fun := c.Fun("entry",
Bloc("entry",
- Valu("mem", OpInitMem, TypeMem, 0, nil),
- Valu("arg1", OpArg, TypeInt64, 0, nil),
- Valu("arg2", OpArg, TypeInt64, 0, nil),
- Valu("arg3", OpArg, TypeInt64, 0, nil),
+ Valu("mem", OpInitMem, types.TypeMem, 0, nil),
+ Valu("arg1", OpArg, c.config.Types.Int64, 0, nil),
+ Valu("arg2", OpArg, c.config.Types.Int64, 0, nil),
+ Valu("arg3", OpArg, c.config.Types.Int64, 0, nil),
Goto("b1")),
Bloc("b1",
- Valu("cmp1", OpLess64, TypeBool, 0, nil, "arg1", "arg2"),
+ Valu("cmp1", OpLess64, c.config.Types.Bool, 0, nil, "arg1", "arg2"),
If("cmp1", "b2", "b3")),
Bloc("b2",
- Valu("cmp2", OpLess64, TypeBool, 0, nil, "arg2", "arg3"),
+ Valu("cmp2", OpLess64, c.config.Types.Bool, 0, nil, "arg2", "arg3"),
Goto("b3")),
Bloc("b3",
- Valu("phi2", OpPhi, TypeBool, 0, nil, "cmp1", "cmp2"),
+ Valu("phi2", OpPhi, c.config.Types.Bool, 0, nil, "cmp1", "cmp2"),
If("phi2", "b4", "b5")),
Bloc("b4",
- Valu("cmp3", OpLess64, TypeBool, 0, nil, "arg3", "arg1"),
+ Valu("cmp3", OpLess64, c.config.Types.Bool, 0, nil, "arg3", "arg1"),
Goto("b5")),
Bloc("b5",
- Valu("phi3", OpPhi, TypeBool, 0, nil, "phi2", "cmp3"),
+ Valu("phi3", OpPhi, c.config.Types.Bool, 0, nil, "phi2", "cmp3"),
If("phi3", "b6", "b7")),
Bloc("b6",
Exit("mem")),
_32bit uintptr // size on 32bit platforms
_64bit uintptr // size on 64bit platforms
}{
- {Value{}, 72, 120},
+ {Value{}, 68, 112},
{Block{}, 152, 288},
}
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
)
}
type stackValState struct {
- typ Type
+ typ *types.Type
spill *Value
needSlot bool
}
// TODO: share slots among equivalent types. We would need to
// only share among types with the same GC signature. See the
// type.Equal calls below for where this matters.
- locations := map[Type][]LocalSlot{}
+ locations := map[*types.Type][]LocalSlot{}
// Each time we assign a stack slot to a value v, we remember
// the slot we used via an index into locations[v.Type].
} else {
name = names[v.ID]
}
- if name.N != nil && v.Type.Compare(name.Type) == CMPeq {
+ if name.N != nil && v.Type.Compare(name.Type) == types.CMPeq {
for _, id := range s.interfere[v.ID] {
h := f.getHome(id)
if h != nil && h.(LocalSlot).N == name.N && h.(LocalSlot).Off == name.Off {
if s.values[v.ID].needSlot {
live.remove(v.ID)
for _, id := range live.contents() {
- if s.values[v.ID].typ.Compare(s.values[id].typ) == CMPeq {
+ if s.values[v.ID].typ.Compare(s.values[id].typ) == types.CMPeq {
s.interfere[v.ID] = append(s.interfere[v.ID], id)
s.interfere[id] = append(s.interfere[id], v.ID)
}
+++ /dev/null
-// Copyright 2015 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package ssa
-
-import "cmd/internal/obj"
-
-// TODO: use go/types instead?
-
-// A type interface used to import cmd/internal/gc:Type
-// Type instances are not guaranteed to be canonical.
-type Type interface {
- Size() int64 // return the size in bytes
- Alignment() int64
-
- IsBoolean() bool // is a named or unnamed boolean type
- IsInteger() bool // ... ditto for the others
- IsSigned() bool
- IsFloat() bool
- IsComplex() bool
- IsPtrShaped() bool
- IsString() bool
- IsSlice() bool
- IsArray() bool
- IsStruct() bool
- IsInterface() bool
-
- IsMemory() bool // special ssa-package-only types
- IsFlags() bool
- IsVoid() bool
- IsTuple() bool
-
- ElemType() Type // given []T or *T or [n]T, return T
- PtrTo() Type // given T, return *T
-
- NumFields() int // # of fields of a struct
- FieldType(i int) Type // type of ith field of the struct or ith part of a tuple
- FieldOff(i int) int64 // offset of ith field of the struct
- FieldName(i int) string // name of ith field of the struct
-
- NumElem() int64 // # of elements of an array
-
- HasPointer() bool // has heap pointer
-
- String() string
- SimpleString() string // a coarser generic description of T, e.g. T's underlying type
- Compare(Type) Cmp // compare types, returning one of CMPlt, CMPeq, CMPgt.
- Symbol() *obj.LSym // the symbol of the type
-}
-
-// Special compiler-only types.
-type CompilerType struct {
- Name string
- size int64
- Memory bool
- Flags bool
- Void bool
- Int128 bool
-}
-
-func (t *CompilerType) Size() int64 { return t.size } // Size in bytes
-func (t *CompilerType) Alignment() int64 { return 0 }
-func (t *CompilerType) IsBoolean() bool { return false }
-func (t *CompilerType) IsInteger() bool { return false }
-func (t *CompilerType) IsSigned() bool { return false }
-func (t *CompilerType) IsFloat() bool { return false }
-func (t *CompilerType) IsComplex() bool { return false }
-func (t *CompilerType) IsPtrShaped() bool { return false }
-func (t *CompilerType) IsString() bool { return false }
-func (t *CompilerType) IsSlice() bool { return false }
-func (t *CompilerType) IsArray() bool { return false }
-func (t *CompilerType) IsStruct() bool { return false }
-func (t *CompilerType) IsInterface() bool { return false }
-func (t *CompilerType) IsMemory() bool { return t.Memory }
-func (t *CompilerType) IsFlags() bool { return t.Flags }
-func (t *CompilerType) IsVoid() bool { return t.Void }
-func (t *CompilerType) IsTuple() bool { return false }
-func (t *CompilerType) String() string { return t.Name }
-func (t *CompilerType) SimpleString() string { return t.Name }
-func (t *CompilerType) ElemType() Type { panic("not implemented") }
-func (t *CompilerType) PtrTo() Type { panic("not implemented") }
-func (t *CompilerType) NumFields() int { panic("not implemented") }
-func (t *CompilerType) FieldType(i int) Type { panic("not implemented") }
-func (t *CompilerType) FieldOff(i int) int64 { panic("not implemented") }
-func (t *CompilerType) FieldName(i int) string { panic("not implemented") }
-func (t *CompilerType) NumElem() int64 { panic("not implemented") }
-func (t *CompilerType) HasPointer() bool { panic("not implemented") }
-func (t *CompilerType) Symbol() *obj.LSym { panic("not implemented") }
-
-type TupleType struct {
- first Type
- second Type
- // Any tuple with a memory type must put that memory type second.
-}
-
-func (t *TupleType) Size() int64 { panic("not implemented") }
-func (t *TupleType) Alignment() int64 { panic("not implemented") }
-func (t *TupleType) IsBoolean() bool { return false }
-func (t *TupleType) IsInteger() bool { return false }
-func (t *TupleType) IsSigned() bool { return false }
-func (t *TupleType) IsFloat() bool { return false }
-func (t *TupleType) IsComplex() bool { return false }
-func (t *TupleType) IsPtrShaped() bool { return false }
-func (t *TupleType) IsString() bool { return false }
-func (t *TupleType) IsSlice() bool { return false }
-func (t *TupleType) IsArray() bool { return false }
-func (t *TupleType) IsStruct() bool { return false }
-func (t *TupleType) IsInterface() bool { return false }
-func (t *TupleType) IsMemory() bool { return false }
-func (t *TupleType) IsFlags() bool { return false }
-func (t *TupleType) IsVoid() bool { return false }
-func (t *TupleType) IsTuple() bool { return true }
-func (t *TupleType) String() string { return t.first.String() + "," + t.second.String() }
-func (t *TupleType) SimpleString() string { return "Tuple" }
-func (t *TupleType) ElemType() Type { panic("not implemented") }
-func (t *TupleType) PtrTo() Type { panic("not implemented") }
-func (t *TupleType) NumFields() int { panic("not implemented") }
-func (t *TupleType) FieldType(i int) Type {
- switch i {
- case 0:
- return t.first
- case 1:
- return t.second
- default:
- panic("bad tuple index")
- }
-}
-func (t *TupleType) FieldOff(i int) int64 { panic("not implemented") }
-func (t *TupleType) FieldName(i int) string { panic("not implemented") }
-func (t *TupleType) NumElem() int64 { panic("not implemented") }
-func (t *TupleType) HasPointer() bool { panic("not implemented") }
-func (t *TupleType) Symbol() *obj.LSym { panic("not implemented") }
-
-// Cmp is a comparison between values a and b.
-// -1 if a < b
-// 0 if a == b
-// 1 if a > b
-type Cmp int8
-
-const (
- CMPlt = Cmp(-1)
- CMPeq = Cmp(0)
- CMPgt = Cmp(1)
-)
-
-func (t *CompilerType) Compare(u Type) Cmp {
- x, ok := u.(*CompilerType)
- // ssa.CompilerType is smaller than any other type
- if !ok {
- return CMPlt
- }
- if t == x {
- return CMPeq
- }
- // desire fast sorting, not pretty sorting.
- if len(t.Name) == len(x.Name) {
- if t.Name == x.Name {
- return CMPeq
- }
- if t.Name < x.Name {
- return CMPlt
- }
- return CMPgt
- }
- if len(t.Name) > len(x.Name) {
- return CMPgt
- }
- return CMPlt
-}
-
-func (t *TupleType) Compare(u Type) Cmp {
- // ssa.TupleType is greater than ssa.CompilerType
- if _, ok := u.(*CompilerType); ok {
- return CMPgt
- }
- // ssa.TupleType is smaller than any other type
- x, ok := u.(*TupleType)
- if !ok {
- return CMPlt
- }
- if t == x {
- return CMPeq
- }
- if c := t.first.Compare(x.first); c != CMPeq {
- return c
- }
- return t.second.Compare(x.second)
-}
-
-var (
- TypeInvalid = &CompilerType{Name: "invalid"}
- TypeMem = &CompilerType{Name: "mem", Memory: true}
- TypeFlags = &CompilerType{Name: "flags", Flags: true}
- TypeVoid = &CompilerType{Name: "void", Void: true}
- TypeInt128 = &CompilerType{Name: "int128", size: 16, Int128: true}
-)
-
-func MakeTuple(t0, t1 Type) *TupleType {
- return &TupleType{first: t0, second: t1}
-}
+++ /dev/null
-// Copyright 2015 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-package ssa
-
-import "cmd/internal/obj"
-
-// Stub implementation used for testing.
-type TypeImpl struct {
- Size_ int64
- Align int64
- Boolean bool
- Integer bool
- Signed bool
- Float bool
- Complex bool
- Ptr bool
- string bool
- slice bool
- array bool
- struct_ bool
- inter bool
- Elem_ Type
-
- Name string
-}
-
-func (t *TypeImpl) Size() int64 { return t.Size_ }
-func (t *TypeImpl) Alignment() int64 { return t.Align }
-func (t *TypeImpl) IsBoolean() bool { return t.Boolean }
-func (t *TypeImpl) IsInteger() bool { return t.Integer }
-func (t *TypeImpl) IsSigned() bool { return t.Signed }
-func (t *TypeImpl) IsFloat() bool { return t.Float }
-func (t *TypeImpl) IsComplex() bool { return t.Complex }
-func (t *TypeImpl) IsPtrShaped() bool { return t.Ptr }
-func (t *TypeImpl) IsString() bool { return t.string }
-func (t *TypeImpl) IsSlice() bool { return t.slice }
-func (t *TypeImpl) IsArray() bool { return t.array }
-func (t *TypeImpl) IsStruct() bool { return t.struct_ }
-func (t *TypeImpl) IsInterface() bool { return t.inter }
-func (t *TypeImpl) IsMemory() bool { return false }
-func (t *TypeImpl) IsFlags() bool { return false }
-func (t *TypeImpl) IsTuple() bool { return false }
-func (t *TypeImpl) IsVoid() bool { return false }
-func (t *TypeImpl) String() string { return t.Name }
-func (t *TypeImpl) SimpleString() string { return t.Name }
-func (t *TypeImpl) ElemType() Type { return t.Elem_ }
-func (t *TypeImpl) PtrTo() Type { return TypeBytePtr }
-func (t *TypeImpl) NumFields() int { panic("not implemented") }
-func (t *TypeImpl) FieldType(i int) Type { panic("not implemented") }
-func (t *TypeImpl) FieldOff(i int) int64 { panic("not implemented") }
-func (t *TypeImpl) FieldName(i int) string { panic("not implemented") }
-func (t *TypeImpl) NumElem() int64 { panic("not implemented") }
-func (t *TypeImpl) HasPointer() bool { return t.Ptr }
-func (t *TypeImpl) Symbol() *obj.LSym { panic("not implemented") }
-
-func (t *TypeImpl) Equal(u Type) bool {
- x, ok := u.(*TypeImpl)
- if !ok {
- return false
- }
- return x == t
-}
-
-func (t *TypeImpl) Compare(u Type) Cmp {
- x, ok := u.(*TypeImpl)
- // ssa.CompilerType < ssa.TypeImpl < gc.Type
- if !ok {
- _, ok := u.(*CompilerType)
- if ok {
- return CMPgt
- }
- return CMPlt
- }
- if t == x {
- return CMPeq
- }
- if t.Name < x.Name {
- return CMPlt
- }
- if t.Name > x.Name {
- return CMPgt
- }
- return CMPeq
-
-}
-
-var (
- // shortcuts for commonly used basic types
- TypeInt8 = &TypeImpl{Size_: 1, Align: 1, Integer: true, Signed: true, Name: "int8"}
- TypeInt16 = &TypeImpl{Size_: 2, Align: 2, Integer: true, Signed: true, Name: "int16"}
- TypeInt32 = &TypeImpl{Size_: 4, Align: 4, Integer: true, Signed: true, Name: "int32"}
- TypeInt64 = &TypeImpl{Size_: 8, Align: 8, Integer: true, Signed: true, Name: "int64"}
- TypeFloat32 = &TypeImpl{Size_: 4, Align: 4, Float: true, Name: "float32"}
- TypeFloat64 = &TypeImpl{Size_: 8, Align: 8, Float: true, Name: "float64"}
- TypeComplex64 = &TypeImpl{Size_: 8, Align: 4, Complex: true, Name: "complex64"}
- TypeComplex128 = &TypeImpl{Size_: 16, Align: 8, Complex: true, Name: "complex128"}
- TypeUInt8 = &TypeImpl{Size_: 1, Align: 1, Integer: true, Name: "uint8"}
- TypeUInt16 = &TypeImpl{Size_: 2, Align: 2, Integer: true, Name: "uint16"}
- TypeUInt32 = &TypeImpl{Size_: 4, Align: 4, Integer: true, Name: "uint32"}
- TypeUInt64 = &TypeImpl{Size_: 8, Align: 8, Integer: true, Name: "uint64"}
- TypeBool = &TypeImpl{Size_: 1, Align: 1, Boolean: true, Name: "bool"}
- TypeBytePtr = &TypeImpl{Size_: 8, Align: 8, Ptr: true, Name: "*byte"}
- TypeInt64Ptr = &TypeImpl{Size_: 8, Align: 8, Ptr: true, Name: "*int64"}
-)
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/src"
"fmt"
// The type of this value. Normally this will be a Go type, but there
// are a few other pseudo-types, see type.go.
- Type Type
+ Type *types.Type
// Auxiliary info for this value. The type of this information depends on the opcode and type.
// AuxInt is used for integer values, Aux is used for other values.
package ssa
import (
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/src"
)
// needwb returns whether we need write barrier for store op v.
// v must be Store/Move/Zero.
func needwb(v *Value) bool {
- t, ok := v.Aux.(Type)
+ t, ok := v.Aux.(*types.Type)
if !ok {
v.Fatalf("store aux is not a type: %s", v.LongString())
}
// set up control flow for write barrier test
// load word, test word, avoiding partial register write from load byte.
- types := &f.Config.Types
- flag := b.NewValue2(pos, OpLoad, types.UInt32, wbaddr, mem)
- flag = b.NewValue2(pos, OpNeq32, types.Bool, flag, const0)
+ cfgtypes := &f.Config.Types
+ flag := b.NewValue2(pos, OpLoad, cfgtypes.UInt32, wbaddr, mem)
+ flag = b.NewValue2(pos, OpNeq32, cfgtypes.Bool, flag, const0)
b.Kind = BlockIf
b.SetControl(flag)
b.Likely = BranchUnlikely
case OpMoveWB:
fn = typedmemmove
val = w.Args[1]
- typ = &ExternSymbol{Sym: w.Aux.(Type).Symbol()}
+ typ = &ExternSymbol{Sym: w.Aux.(*types.Type).Symbol()}
case OpZeroWB:
fn = typedmemclr
- typ = &ExternSymbol{Sym: w.Aux.(Type).Symbol()}
+ typ = &ExternSymbol{Sym: w.Aux.(*types.Type).Symbol()}
}
// then block: emit write barrier call
// else block: normal store
switch w.Op {
case OpStoreWB:
- memElse = bElse.NewValue3A(pos, OpStore, TypeMem, w.Aux, ptr, val, memElse)
+ memElse = bElse.NewValue3A(pos, OpStore, types.TypeMem, w.Aux, ptr, val, memElse)
case OpMoveWB:
- memElse = bElse.NewValue3I(pos, OpMove, TypeMem, w.AuxInt, ptr, val, memElse)
+ memElse = bElse.NewValue3I(pos, OpMove, types.TypeMem, w.AuxInt, ptr, val, memElse)
memElse.Aux = w.Aux
case OpZeroWB:
- memElse = bElse.NewValue2I(pos, OpZero, TypeMem, w.AuxInt, ptr, memElse)
+ memElse = bElse.NewValue2I(pos, OpZero, types.TypeMem, w.AuxInt, ptr, memElse)
memElse.Aux = w.Aux
}
bEnd.Values = append(bEnd.Values, last)
last.Block = bEnd
last.reset(OpPhi)
- last.Type = TypeMem
+ last.Type = types.TypeMem
last.AddArg(memThen)
last.AddArg(memElse)
for _, w := range stores {
t := val.Type.ElemType()
tmp = b.Func.fe.Auto(val.Pos, t)
aux := &AutoSymbol{Node: tmp}
- mem = b.NewValue1A(pos, OpVarDef, TypeMem, tmp, mem)
+ mem = b.NewValue1A(pos, OpVarDef, types.TypeMem, tmp, mem)
tmpaddr := b.NewValue1A(pos, OpAddr, t.PtrTo(), aux, sp)
siz := t.Size()
- mem = b.NewValue3I(pos, OpMove, TypeMem, siz, tmpaddr, val, mem)
+ mem = b.NewValue3I(pos, OpMove, types.TypeMem, siz, tmpaddr, val, mem)
mem.Aux = t
val = tmpaddr
}
taddr := b.NewValue1A(pos, OpAddr, b.Func.Config.Types.Uintptr, typ, sb)
off = round(off, taddr.Type.Alignment())
arg := b.NewValue1I(pos, OpOffPtr, taddr.Type.PtrTo(), off, sp)
- mem = b.NewValue3A(pos, OpStore, TypeMem, ptr.Type, arg, taddr, mem)
+ mem = b.NewValue3A(pos, OpStore, types.TypeMem, ptr.Type, arg, taddr, mem)
off += taddr.Type.Size()
}
off = round(off, ptr.Type.Alignment())
arg := b.NewValue1I(pos, OpOffPtr, ptr.Type.PtrTo(), off, sp)
- mem = b.NewValue3A(pos, OpStore, TypeMem, ptr.Type, arg, ptr, mem)
+ mem = b.NewValue3A(pos, OpStore, types.TypeMem, ptr.Type, arg, ptr, mem)
off += ptr.Type.Size()
if val != nil {
off = round(off, val.Type.Alignment())
arg = b.NewValue1I(pos, OpOffPtr, val.Type.PtrTo(), off, sp)
- mem = b.NewValue3A(pos, OpStore, TypeMem, val.Type, arg, val, mem)
+ mem = b.NewValue3A(pos, OpStore, types.TypeMem, val.Type, arg, val, mem)
off += val.Type.Size()
}
off = round(off, config.PtrSize)
// issue call
- mem = b.NewValue1A(pos, OpStaticCall, TypeMem, fn, mem)
+ mem = b.NewValue1A(pos, OpStaticCall, types.TypeMem, fn, mem)
mem.AuxInt = off - config.ctxt.FixedFrameSize()
if valIsVolatile {
- mem = b.NewValue1A(pos, OpVarKill, TypeMem, tmp, mem) // mark temp dead
+ mem = b.NewValue1A(pos, OpVarKill, types.TypeMem, tmp, mem) // mark temp dead
}
return mem
package ssa
-import "testing"
+import (
+ "cmd/compile/internal/types"
+ "testing"
+)
func TestWriteBarrierStoreOrder(t *testing.T) {
// Make sure the writebarrier phase works even when StoreWB ops are not in dependency order
c := testConfig(t)
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("sp", OpSP, TypeInvalid, 0, nil),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("sp", OpSP, types.TypeInvalid, 0, nil),
Valu("v", OpConstNil, ptrType, 0, nil),
Valu("addr1", OpAddr, ptrType, 0, nil, "sb"),
- Valu("wb2", OpStore, TypeMem, 0, ptrType, "addr1", "v", "wb1"),
- Valu("wb1", OpStore, TypeMem, 0, ptrType, "addr1", "v", "start"), // wb1 and wb2 are out of order
+ Valu("wb2", OpStore, types.TypeMem, 0, ptrType, "addr1", "v", "wb1"),
+ Valu("wb1", OpStore, types.TypeMem, 0, ptrType, "addr1", "v", "start"), // wb1 and wb2 are out of order
Goto("exit")),
Bloc("exit",
Exit("wb2")))
// a Phi op takes the store in the same block as argument.
// See issue #19067.
c := testConfig(t)
- ptrType := &TypeImpl{Size_: 8, Ptr: true, Name: "testptr"} // dummy for testing
+ ptrType := c.config.Types.BytePtr
fun := c.Fun("entry",
Bloc("entry",
- Valu("start", OpInitMem, TypeMem, 0, nil),
- Valu("sb", OpSB, TypeInvalid, 0, nil),
- Valu("sp", OpSP, TypeInvalid, 0, nil),
+ Valu("start", OpInitMem, types.TypeMem, 0, nil),
+ Valu("sb", OpSB, types.TypeInvalid, 0, nil),
+ Valu("sp", OpSP, types.TypeInvalid, 0, nil),
Goto("loop")),
Bloc("loop",
- Valu("phi", OpPhi, TypeMem, 0, nil, "start", "wb"),
+ Valu("phi", OpPhi, types.TypeMem, 0, nil, "start", "wb"),
Valu("v", OpConstNil, ptrType, 0, nil),
Valu("addr", OpAddr, ptrType, 0, nil, "sb"),
- Valu("wb", OpStore, TypeMem, 0, ptrType, "addr", "v", "phi"), // has write barrier
+ Valu("wb", OpStore, types.TypeMem, 0, ptrType, "addr", "v", "phi"), // has write barrier
Goto("loop")))
CheckFunc(fun.f)
package ssa
+import "cmd/compile/internal/types"
+
// zcse does an initial pass of common-subexpression elimination on the
// function for values with zero arguments to allow the more expensive cse
// to begin with a reduced number of values. Values are just relinked,
op Op
ai int64 // aux int
ax interface{} // aux
- t Type // type
+ t *types.Type // type
}
// keyFor returns the AuxInt portion of a key structure uniquely identifying a
package types
import (
- "cmd/compile/internal/ssa"
"cmd/internal/obj"
"cmd/internal/src"
"fmt"
// pseudo-types for import/export
TDDDFIELD // wrapper: contained type is a ... field
+ // SSA backend types
+ TSSA // internal types used by SSA backend (flags, memory, etc.)
+ TTUPLE // a pair of types, used by SSA backend
+
NTYPE
)
return t.Extra.(*Chan)
}
+type Tuple struct {
+ first *Type
+ second *Type
+ // Any tuple with a memory type must put that memory type second.
+}
+
// Array contains Type fields specific to array types.
type Array struct {
Elem *Type // element type
t.Extra = DDDField{}
case TCHAN:
t.Extra = new(Chan)
+ case TTUPLE:
+ t.Extra = new(Tuple)
}
return t
}
return t
}
+func NewTuple(t1, t2 *Type) *Type {
+ t := New(TTUPLE)
+ t.Extra.(*Tuple).first = t1
+ t.Extra.(*Tuple).second = t2
+ return t
+}
+
+func newSSA(name string) *Type {
+ t := New(TSSA)
+ t.Extra = name
+ return t
+}
+
// NewMap returns a new map Type with key type k and element (aka value) type v.
func NewMap(k, v *Type) *Type {
t := New(TMAP)
case TARRAY:
x := *t.Extra.(*Array)
nt.Extra = &x
+ case TTUPLE, TSSA:
+ Fatalf("ssa types cannot be copied")
}
// TODO(mdempsky): Find out why this is necessary and explain.
if t.Orig == t {
}
func (t *Type) Size() int64 {
+ if t.Etype == TSSA {
+ if t == TypeInt128 {
+ return 16
+ }
+ return 0
+ }
Dowidth(t)
return t.Width
}
return t.Etype.String()
}
+// Cmp is a comparison between values a and b.
+// -1 if a < b
+// 0 if a == b
+// 1 if a > b
+type Cmp int8
+
+const (
+ CMPlt = Cmp(-1)
+ CMPeq = Cmp(0)
+ CMPgt = Cmp(1)
+)
+
// Compare compares types for purposes of the SSA back
-// end, returning an ssa.Cmp (one of CMPlt, CMPeq, CMPgt).
+// end, returning a Cmp (one of CMPlt, CMPeq, CMPgt).
// The answers are correct for an optimizer
// or code generator, but not necessarily typechecking.
// The order chosen is arbitrary, only consistency and division
// into equivalence classes (Types that compare CMPeq) matters.
-func (t *Type) Compare(u ssa.Type) ssa.Cmp {
- x, ok := u.(*Type)
- // ssa.CompilerType is smaller than gc.Type
- // bare pointer equality is easy.
- if !ok {
- return ssa.CMPgt
- }
+func (t *Type) Compare(x *Type) Cmp {
if x == t {
- return ssa.CMPeq
+ return CMPeq
}
return t.cmp(x)
}
-func cmpForNe(x bool) ssa.Cmp {
+func cmpForNe(x bool) Cmp {
if x {
- return ssa.CMPlt
+ return CMPlt
}
- return ssa.CMPgt
+ return CMPgt
}
-func (r *Sym) cmpsym(s *Sym) ssa.Cmp {
+func (r *Sym) cmpsym(s *Sym) Cmp {
if r == s {
- return ssa.CMPeq
+ return CMPeq
}
if r == nil {
- return ssa.CMPlt
+ return CMPlt
}
if s == nil {
- return ssa.CMPgt
+ return CMPgt
}
// Fast sort, not pretty sort
if len(r.Name) != len(s.Name) {
if r.Name != s.Name {
return cmpForNe(r.Name < s.Name)
}
- return ssa.CMPeq
+ return CMPeq
}
-// cmp compares two *Types t and x, returning ssa.CMPlt,
-// ssa.CMPeq, ssa.CMPgt as t<x, t==x, t>x, for an arbitrary
+// cmp compares two *Types t and x, returning CMPlt,
+// CMPeq, CMPgt as t<x, t==x, t>x, for an arbitrary
// and optimizer-centric notion of comparison.
// TODO(josharian): make this safe for recursive interface types
// and use in signatlist sorting. See issue 19869.
-func (t *Type) cmp(x *Type) ssa.Cmp {
+func (t *Type) cmp(x *Type) Cmp {
// This follows the structure of eqtype in subr.go
// with two exceptions.
// 1. Symbols are compared more carefully because a <,=,> result is desired.
// 2. Maps are treated specially to avoid endless recursion -- maps
// contain an internal data type not expressible in Go source code.
if t == x {
- return ssa.CMPeq
+ return CMPeq
}
if t == nil {
- return ssa.CMPlt
+ return CMPlt
}
if x == nil {
- return ssa.CMPgt
+ return CMPgt
}
if t.Etype != x.Etype {
switch t.Etype {
case TUINT8:
if (t == Types[TUINT8] || t == Bytetype) && (x == Types[TUINT8] || x == Bytetype) {
- return ssa.CMPeq
+ return CMPeq
}
case TINT32:
if (t == Types[Runetype.Etype] || t == Runetype) && (x == Types[Runetype.Etype] || x == Runetype) {
- return ssa.CMPeq
+ return CMPeq
}
}
}
- if c := t.Sym.cmpsym(x.Sym); c != ssa.CMPeq {
+ if c := t.Sym.cmpsym(x.Sym); c != CMPeq {
return c
}
if t.Vargen != x.Vargen {
return cmpForNe(t.Vargen < x.Vargen)
}
- return ssa.CMPeq
+ return CMPeq
}
// both syms nil, look at structure below.
switch t.Etype {
case TBOOL, TFLOAT32, TFLOAT64, TCOMPLEX64, TCOMPLEX128, TUNSAFEPTR, TUINTPTR,
TINT8, TINT16, TINT32, TINT64, TINT, TUINT8, TUINT16, TUINT32, TUINT64, TUINT:
- return ssa.CMPeq
- }
+ return CMPeq
+
+ case TSSA:
+ tname := t.Extra.(string)
+ xname := x.Extra.(string)
+ // desire fast sorting, not pretty sorting.
+ if len(tname) == len(xname) {
+ if tname == xname {
+ return CMPeq
+ }
+ if tname < xname {
+ return CMPlt
+ }
+ return CMPgt
+ }
+ if len(tname) > len(xname) {
+ return CMPgt
+ }
+ return CMPlt
+
+ case TTUPLE:
+ xtup := x.Extra.(*Tuple)
+ ttup := t.Extra.(*Tuple)
+ if c := ttup.first.Compare(xtup.first); c != CMPeq {
+ return c
+ }
+ return ttup.second.Compare(xtup.second)
- switch t.Etype {
case TMAP:
- if c := t.Key().cmp(x.Key()); c != ssa.CMPeq {
+ if c := t.Key().cmp(x.Key()); c != CMPeq {
return c
}
return t.Val().cmp(x.Val())
case TSTRUCT:
if t.StructType().Map == nil {
if x.StructType().Map != nil {
- return ssa.CMPlt // nil < non-nil
+ return CMPlt // nil < non-nil
}
// to the fallthrough
} else if x.StructType().Map == nil {
- return ssa.CMPgt // nil > non-nil
+ return CMPgt // nil > non-nil
} else if t.StructType().Map.MapType().Bucket == t {
// Both have non-nil Map
// Special case for Maps which include a recursive type where the recursion is not broken with a named type
if x.StructType().Map.MapType().Bucket != x {
- return ssa.CMPlt // bucket maps are least
+ return CMPlt // bucket maps are least
}
return t.StructType().Map.cmp(x.StructType().Map)
} else if x.StructType().Map.MapType().Bucket == x {
- return ssa.CMPgt // bucket maps are least
+ return CMPgt // bucket maps are least
} // If t != t.Map.Bucket, fall through to general case
tfs := t.FieldSlice()
if t1.Note != x1.Note {
return cmpForNe(t1.Note < x1.Note)
}
- if c := t1.Sym.cmpsym(x1.Sym); c != ssa.CMPeq {
+ if c := t1.Sym.cmpsym(x1.Sym); c != CMPeq {
return c
}
- if c := t1.Type.cmp(x1.Type); c != ssa.CMPeq {
+ if c := t1.Type.cmp(x1.Type); c != CMPeq {
return c
}
}
if len(tfs) != len(xfs) {
return cmpForNe(len(tfs) < len(xfs))
}
- return ssa.CMPeq
+ return CMPeq
case TINTER:
tfs := t.FieldSlice()
xfs := x.FieldSlice()
for i := 0; i < len(tfs) && i < len(xfs); i++ {
t1, x1 := tfs[i], xfs[i]
- if c := t1.Sym.cmpsym(x1.Sym); c != ssa.CMPeq {
+ if c := t1.Sym.cmpsym(x1.Sym); c != CMPeq {
return c
}
- if c := t1.Type.cmp(x1.Type); c != ssa.CMPeq {
+ if c := t1.Type.cmp(x1.Type); c != CMPeq {
return c
}
}
if len(tfs) != len(xfs) {
return cmpForNe(len(tfs) < len(xfs))
}
- return ssa.CMPeq
+ return CMPeq
case TFUNC:
for _, f := range RecvsParamsResults {
if ta.Isddd() != tb.Isddd() {
return cmpForNe(!ta.Isddd())
}
- if c := ta.Type.cmp(tb.Type); c != ssa.CMPeq {
+ if c := ta.Type.cmp(tb.Type); c != CMPeq {
return c
}
}
return cmpForNe(len(tfs) < len(xfs))
}
}
- return ssa.CMPeq
+ return CMPeq
case TARRAY:
if t.NumElem() != x.NumElem() {
return t.IsInterface() && t.NumFields() == 0
}
-func (t *Type) ElemType() ssa.Type {
+func (t *Type) ElemType() *Type {
// TODO(josharian): If Type ever moves to a shared
// internal package, remove this silly wrapper.
return t.Elem()
}
-func (t *Type) PtrTo() ssa.Type {
+func (t *Type) PtrTo() *Type {
return NewPtr(t)
}
func (t *Type) NumFields() int {
return t.Fields().Len()
}
-func (t *Type) FieldType(i int) ssa.Type {
+func (t *Type) FieldType(i int) *Type {
+ if t.Etype == TTUPLE {
+ switch i {
+ case 0:
+ return t.Extra.(*Tuple).first
+ case 1:
+ return t.Extra.(*Tuple).second
+ default:
+ panic("bad tuple index")
+ }
+ }
return t.Field(i).Type
}
func (t *Type) FieldOff(i int) int64 {
return t.Extra.(*Chan).Dir
}
-func (t *Type) IsMemory() bool { return false }
-func (t *Type) IsFlags() bool { return false }
-func (t *Type) IsVoid() bool { return false }
-func (t *Type) IsTuple() bool { return false }
+func (t *Type) IsMemory() bool { return t == TypeMem }
+func (t *Type) IsFlags() bool { return t == TypeFlags }
+func (t *Type) IsVoid() bool { return t == TypeVoid }
+func (t *Type) IsTuple() bool { return t.Etype == TTUPLE }
// IsUntyped reports whether t is an untyped type.
func (t *Type) IsUntyped() bool {
}
return recvType
}
+
+var (
+ TypeInvalid *Type = newSSA("invalid")
+ TypeMem *Type = newSSA("mem")
+ TypeFlags *Type = newSSA("flags")
+ TypeVoid *Type = newSSA("void")
+ TypeInt128 *Type = newSSA("int128")
+)
TFUNCARGS: "TFUNCARGS",
TCHANARGS: "TCHANARGS",
TDDDFIELD: "TDDDFIELD",
+ TSSA: "TSSA",
+ TTUPLE: "TTUPLE",
}
func (et EType) String() string {
import (
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/x86"
"math"
}
// loadPush returns the opcode for load+push of the given type.
-func loadPush(t ssa.Type) obj.As {
+func loadPush(t *types.Type) obj.As {
if t.Size() == 4 {
return x86.AFMOVF
}
"cmd/compile/internal/gc"
"cmd/compile/internal/ssa"
+ "cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/x86"
)
}
// loadByType returns the load instruction of the given type.
-func loadByType(t ssa.Type) obj.As {
+func loadByType(t *types.Type) obj.As {
// Avoid partial register write
if !t.IsFloat() && t.Size() <= 2 {
if t.Size() == 1 {
}
// storeByType returns the store instruction of the given type.
-func storeByType(t ssa.Type) obj.As {
+func storeByType(t *types.Type) obj.As {
width := t.Size()
if t.IsFloat() {
switch width {
}
// moveByType returns the reg->reg move instruction of the given type.
-func moveByType(t ssa.Type) obj.As {
+func moveByType(t *types.Type) obj.As {
if t.IsFloat() {
switch t.Size() {
case 4: