LAPACK  3.9.0
LAPACK: Linear Algebra PACKage

◆ sgghd3()

subroutine sgghd3 ( character  COMPQ,
character  COMPZ,
integer  N,
integer  ILO,
integer  IHI,
real, dimension( lda, * )  A,
integer  LDA,
real, dimension( ldb, * )  B,
integer  LDB,
real, dimension( ldq, * )  Q,
integer  LDQ,
real, dimension( ldz, * )  Z,
integer  LDZ,
real, dimension( * )  WORK,
integer  LWORK,
integer  INFO 
)

SGGHD3

Download SGGHD3 + dependencies [TGZ] [ZIP] [TXT]

Purpose:
 SGGHD3 reduces a pair of real matrices (A,B) to generalized upper
 Hessenberg form using orthogonal transformations, where A is a
 general matrix and B is upper triangular.  The form of the
 generalized eigenvalue problem is
    A*x = lambda*B*x,
 and B is typically made upper triangular by computing its QR
 factorization and moving the orthogonal matrix Q to the left side
 of the equation.

 This subroutine simultaneously reduces A to a Hessenberg matrix H:
    Q**T*A*Z = H
 and transforms B to another upper triangular matrix T:
    Q**T*B*Z = T
 in order to reduce the problem to its standard form
    H*y = lambda*T*y
 where y = Z**T*x.

 The orthogonal matrices Q and Z are determined as products of Givens
 rotations.  They may either be formed explicitly, or they may be
 postmultiplied into input matrices Q1 and Z1, so that

      Q1 * A * Z1**T = (Q1*Q) * H * (Z1*Z)**T

      Q1 * B * Z1**T = (Q1*Q) * T * (Z1*Z)**T

 If Q1 is the orthogonal matrix from the QR factorization of B in the
 original equation A*x = lambda*B*x, then SGGHD3 reduces the original
 problem to generalized Hessenberg form.

 This is a blocked variant of SGGHRD, using matrix-matrix
 multiplications for parts of the computation to enhance performance.
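
 As an illustration of the reduction described above, the following minimal
 sketch (not part of LAPACK) builds a small general matrix A and an upper
 triangular B, then calls SGGHD3 with COMPQ = COMPZ = 'I' so that Q and Z are
 formed explicitly.  The 4-by-4 data, the program name GGH3EX, and the fixed
 workspace bound (block size assumed at most 64) are assumptions made for this
 example only.

      PROGRAM GGH3EX
*     Illustrative sketch: reduce the 4-by-4 pair (A,B), with B already
*     upper triangular, to generalized Hessenberg form.
      IMPLICIT NONE
      INTEGER            N, LDA, LDB, LDQ, LDZ, LWORK
      PARAMETER          ( N = 4, LDA = N, LDB = N, LDQ = N, LDZ = N,
     $                     LWORK = 6*N*64 )
      REAL               A( LDA, N ), B( LDB, N ), Q( LDQ, N ),
     $                   Z( LDZ, N ), WORK( LWORK )
      INTEGER            I, J, INFO
      EXTERNAL           SGGHD3
*
*     Set up a general A and an upper triangular B.
*
      DO J = 1, N
         DO I = 1, N
            A( I, J ) = REAL( I + 2*J )
            IF( I.LE.J ) THEN
               B( I, J ) = REAL( J - I + 1 )
            ELSE
               B( I, J ) = 0.0E+0
            END IF
         END DO
      END DO
*
*     COMPQ = COMPZ = 'I': Q and Z are initialized to the identity and
*     the accumulated orthogonal factors are returned.  ILO = 1 and
*     IHI = N because SGGBAL was not called.
*
      CALL SGGHD3( 'I', 'I', N, 1, N, A, LDA, B, LDB, Q, LDQ, Z, LDZ,
     $             WORK, LWORK, INFO )
      WRITE( *, * ) 'SGGHD3 returned INFO =', INFO
      END

 If B is instead made upper triangular by a QR factorization (for example
 SGEQRF followed by SORMQR applied to A), the orthogonal factor Q1 would be
 passed in Q with COMPQ = 'V', as described above.
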
Parameters
[in]COMPQ
          COMPQ is CHARACTER*1
          = 'N': do not compute Q;
          = 'I': Q is initialized to the unit matrix, and the
                 orthogonal matrix Q is returned;
          = 'V': Q must contain an orthogonal matrix Q1 on entry,
                 and the product Q1*Q is returned.
[in]COMPZ
          COMPZ is CHARACTER*1
          = 'N': do not compute Z;
          = 'I': Z is initialized to the unit matrix, and the
                 orthogonal matrix Z is returned;
          = 'V': Z must contain an orthogonal matrix Z1 on entry,
                 and the product Z1*Z is returned.
[in]N
          N is INTEGER
          The order of the matrices A and B.  N >= 0.
[in]ILO
          ILO is INTEGER
[in]IHI
          IHI is INTEGER

          ILO and IHI mark the rows and columns of A which are to be
          reduced.  It is assumed that A is already upper triangular
          in rows and columns 1:ILO-1 and IHI+1:N.  ILO and IHI are
          normally set by a previous call to SGGBAL; otherwise they
          should be set to 1 and N respectively.
          1 <= ILO <= IHI <= N, if N > 0; ILO=1 and IHI=0, if N=0.
[in,out]A
          A is REAL array, dimension (LDA, N)
          On entry, the N-by-N general matrix to be reduced.
          On exit, the upper triangle and the first subdiagonal of A
          are overwritten with the upper Hessenberg matrix H, and the
          rest is set to zero.
[in]LDA
          LDA is INTEGER
          The leading dimension of the array A.  LDA >= max(1,N).
[in,out]B
          B is REAL array, dimension (LDB, N)
          On entry, the N-by-N upper triangular matrix B.
          On exit, the upper triangular matrix T = Q**T B Z.  The
          elements below the diagonal are set to zero.
[in]LDB
          LDB is INTEGER
          The leading dimension of the array B.  LDB >= max(1,N).
[in,out]Q
          Q is REAL array, dimension (LDQ, N)
          On entry, if COMPQ = 'V', the orthogonal matrix Q1,
          typically from the QR factorization of B.
          On exit, if COMPQ='I', the orthogonal matrix Q, and if
          COMPQ = 'V', the product Q1*Q.
          Not referenced if COMPQ='N'.
[in]LDQ
          LDQ is INTEGER
          The leading dimension of the array Q.
          LDQ >= N if COMPQ='V' or 'I'; LDQ >= 1 otherwise.
[in,out]Z
          Z is REAL array, dimension (LDZ, N)
          On entry, if COMPZ = 'V', the orthogonal matrix Z1.
          On exit, if COMPZ='I', the orthogonal matrix Z, and if
          COMPZ = 'V', the product Z1*Z.
          Not referenced if COMPZ='N'.
[in]LDZ
          LDZ is INTEGER
          The leading dimension of the array Z.
          LDZ >= N if COMPZ='V' or 'I'; LDZ >= 1 otherwise.
[out]WORK
          WORK is REAL array, dimension (LWORK)
          On exit, if INFO = 0, WORK(1) returns the optimal LWORK.
[in]LWORK
          LWORK is INTEGER
          The length of the array WORK.  LWORK >= 1.
          For optimum performance LWORK >= 6*N*NB, where NB is the
          optimal blocksize.

          If LWORK = -1, then a workspace query is assumed; the routine
          only calculates the optimal size of the WORK array, returns
          this value as the first entry of the WORK array, and no error
          message related to LWORK is issued by XERBLA.  A sketch of
          this query-then-allocate pattern follows the parameter list.
[out]INFO
          INFO is INTEGER
          = 0:  successful exit.
          < 0:  if INFO = -i, the i-th argument had an illegal value.
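
 The LWORK = -1 workspace query described under LWORK above is typically used
 as in the following sketch.  The wrapper GGH3WQ is a hypothetical helper
 written for illustration only; it assumes that the matrices and the scalar
 arguments have already been prepared as described in this parameter list.

      SUBROUTINE GGH3WQ( COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB,
     $                   Q, LDQ, Z, LDZ, INFO )
*     Illustrative wrapper (not part of LAPACK): query the optimal
*     workspace size with LWORK = -1, allocate WORK accordingly, and
*     then perform the actual reduction.
      IMPLICIT NONE
      CHARACTER          COMPQ, COMPZ
      INTEGER            N, ILO, IHI, LDA, LDB, LDQ, LDZ, INFO
      REAL               A( LDA, * ), B( LDB, * ), Q( LDQ, * ),
     $                   Z( LDZ, * )
      REAL               WQUERY( 1 )
      REAL, ALLOCATABLE :: WORK( : )
      INTEGER            LWORK
      EXTERNAL           SGGHD3
*
*     Workspace query: only WORK( 1 ) is set.
*
      CALL SGGHD3( COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB, Q, LDQ,
     $             Z, LDZ, WQUERY, -1, INFO )
      IF( INFO.NE.0 ) RETURN
      LWORK = INT( WQUERY( 1 ) )
      ALLOCATE( WORK( LWORK ) )
*
*     Actual reduction with the optimal workspace.
*
      CALL SGGHD3( COMPQ, COMPZ, N, ILO, IHI, A, LDA, B, LDB, Q, LDQ,
     $             Z, LDZ, WORK, LWORK, INFO )
      DEALLOCATE( WORK )
      END
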
Author
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
Date
January 2015
Further Details:
  This routine reduces A to Hessenberg form and maintains B in
  triangular form using a blocked variant of Moler and Stewart's
  original algorithm, as described by Kagstrom, Kressner,
  Quintana-Orti, and Quintana-Orti (BIT 2008).

Definition at line 232 of file sgghd3.f.

232 *
233 * -- LAPACK computational routine (version 3.8.0) --
234 * -- LAPACK is a software package provided by Univ. of Tennessee, --
235 * -- Univ. of California Berkeley, Univ. of Colorado Denver and NAG Ltd..--
236 * January 2015
237 *
238  IMPLICIT NONE
239 *
240 * .. Scalar Arguments ..
241  CHARACTER COMPQ, COMPZ
242  INTEGER IHI, ILO, INFO, LDA, LDB, LDQ, LDZ, N, LWORK
243 * ..
244 * .. Array Arguments ..
245  REAL A( LDA, * ), B( LDB, * ), Q( LDQ, * ),
246  $ Z( LDZ, * ), WORK( * )
247 * ..
248 *
249 * =====================================================================
250 *
251 * .. Parameters ..
252  REAL ZERO, ONE
253  parameter( zero = 0.0e+0, one = 1.0e+0 )
254 * ..
255 * .. Local Scalars ..
256  LOGICAL BLK22, INITQ, INITZ, LQUERY, WANTQ, WANTZ
257  CHARACTER*1 COMPQ2, COMPZ2
258  INTEGER COLA, I, IERR, J, J0, JCOL, JJ, JROW, K,
259  $ KACC22, LEN, LWKOPT, N2NB, NB, NBLST, NBMIN,
260  $ NH, NNB, NX, PPW, PPWO, PW, TOP, TOPQ
261  REAL C, C1, C2, S, S1, S2, TEMP, TEMP1, TEMP2, TEMP3
262 * ..
263 * .. External Functions ..
264  LOGICAL LSAME
265  INTEGER ILAENV
266  EXTERNAL ilaenv, lsame
267 * ..
268 * .. External Subroutines ..
269  EXTERNAL sgghrd, slartg, slaset, sorm22, srot, sgemm,
270  $ sgemv, strmv, slacpy, xerbla
271 * ..
272 * .. Intrinsic Functions ..
273  INTRINSIC real, max
274 * ..
275 * .. Executable Statements ..
276 *
277 * Decode and test the input parameters.
278 *
279  info = 0
280  nb = ilaenv( 1, 'SGGHD3', ' ', n, ilo, ihi, -1 )
281  lwkopt = max( 6*n*nb, 1 )
282  work( 1 ) = real( lwkopt )
283  initq = lsame( compq, 'I' )
284  wantq = initq .OR. lsame( compq, 'V' )
285  initz = lsame( compz, 'I' )
286  wantz = initz .OR. lsame( compz, 'V' )
287  lquery = ( lwork.EQ.-1 )
288 *
289  IF( .NOT.lsame( compq, 'N' ) .AND. .NOT.wantq ) THEN
290  info = -1
291  ELSE IF( .NOT.lsame( compz, 'N' ) .AND. .NOT.wantz ) THEN
292  info = -2
293  ELSE IF( n.LT.0 ) THEN
294  info = -3
295  ELSE IF( ilo.LT.1 ) THEN
296  info = -4
297  ELSE IF( ihi.GT.n .OR. ihi.LT.ilo-1 ) THEN
298  info = -5
299  ELSE IF( lda.LT.max( 1, n ) ) THEN
300  info = -7
301  ELSE IF( ldb.LT.max( 1, n ) ) THEN
302  info = -9
303  ELSE IF( ( wantq .AND. ldq.LT.n ) .OR. ldq.LT.1 ) THEN
304  info = -11
305  ELSE IF( ( wantz .AND. ldz.LT.n ) .OR. ldz.LT.1 ) THEN
306  info = -13
307  ELSE IF( lwork.LT.1 .AND. .NOT.lquery ) THEN
308  info = -15
309  END IF
310  IF( info.NE.0 ) THEN
311  CALL xerbla( 'SGGHD3', -info )
312  RETURN
313  ELSE IF( lquery ) THEN
314  RETURN
315  END IF
316 *
317 * Initialize Q and Z if desired.
318 *
319  IF( initq )
320  $ CALL slaset( 'All', n, n, zero, one, q, ldq )
321  IF( initz )
322  $ CALL slaset( 'All', n, n, zero, one, z, ldz )
323 *
324 * Zero out lower triangle of B.
325 *
326  IF( n.GT.1 )
327  $ CALL slaset( 'Lower', n-1, n-1, zero, zero, b(2, 1), ldb )
328 *
329 * Quick return if possible
330 *
331  nh = ihi - ilo + 1
332  IF( nh.LE.1 ) THEN
333  work( 1 ) = one
334  RETURN
335  END IF
336 *
337 * Determine the blocksize.
338 *
339  nbmin = ilaenv( 2, 'SGGHD3', ' ', n, ilo, ihi, -1 )
340  IF( nb.GT.1 .AND. nb.LT.nh ) THEN
341 *
342 * Determine when to use unblocked instead of blocked code.
343 *
344  nx = max( nb, ilaenv( 3, 'SGGHD3', ' ', n, ilo, ihi, -1 ) )
345  IF( nx.LT.nh ) THEN
346 *
347 * Determine if workspace is large enough for blocked code.
348 *
349  IF( lwork.LT.lwkopt ) THEN
350 *
351 * Not enough workspace to use optimal NB: determine the
352 * minimum value of NB, and reduce NB or force use of
353 * unblocked code.
354 *
355  nbmin = max( 2, ilaenv( 2, 'SGGHD3', ' ', n, ilo, ihi,
356  $ -1 ) )
357  IF( lwork.GE.6*n*nbmin ) THEN
358  nb = lwork / ( 6*n )
359  ELSE
360  nb = 1
361  END IF
362  END IF
363  END IF
364  END IF
365 *
366  IF( nb.LT.nbmin .OR. nb.GE.nh ) THEN
367 *
368 * Use unblocked code below
369 *
370  jcol = ilo
371 *
372  ELSE
373 *
374 * Use blocked code
375 *
376  kacc22 = ilaenv( 16, 'SGGHD3', ' ', n, ilo, ihi, -1 )
377  blk22 = kacc22.EQ.2
378  DO jcol = ilo, ihi-2, nb
379  nnb = min( nb, ihi-jcol-1 )
380 *
381 * Initialize small orthogonal factors that will hold the
382 * accumulated Givens rotations in workspace.
383 * N2NB denotes the number of 2*NNB-by-2*NNB factors
384 * NBLST denotes the (possibly smaller) order of the last
385 * factor.
386 *
387  n2nb = ( ihi-jcol-1 ) / nnb - 1
388  nblst = ihi - jcol - n2nb*nnb
389  CALL slaset( 'All', nblst, nblst, zero, one, work, nblst )
390  pw = nblst * nblst + 1
391  DO i = 1, n2nb
392  CALL slaset( 'All', 2*nnb, 2*nnb, zero, one,
393  $ work( pw ), 2*nnb )
394  pw = pw + 4*nnb*nnb
395  END DO
396 *
397 * Reduce columns JCOL:JCOL+NNB-1 of A to Hessenberg form.
398 *
399  DO j = jcol, jcol+nnb-1
400 *
401 * Reduce Jth column of A. Store cosines and sines in Jth
402 * column of A and B, respectively.
403 *
404  DO i = ihi, j+2, -1
405  temp = a( i-1, j )
406  CALL slartg( temp, a( i, j ), c, s, a( i-1, j ) )
407  a( i, j ) = c
408  b( i, j ) = s
409  END DO
410 *
411 * Accumulate Givens rotations into workspace array.
412 *
413  ppw = ( nblst + 1 )*( nblst - 2 ) - j + jcol + 1
414  len = 2 + j - jcol
415  jrow = j + n2nb*nnb + 2
416  DO i = ihi, jrow, -1
417  c = a( i, j )
418  s = b( i, j )
419  DO jj = ppw, ppw+len-1
420  temp = work( jj + nblst )
421  work( jj + nblst ) = c*temp - s*work( jj )
422  work( jj ) = s*temp + c*work( jj )
423  END DO
424  len = len + 1
425  ppw = ppw - nblst - 1
426  END DO
427 *
428  ppwo = nblst*nblst + ( nnb+j-jcol-1 )*2*nnb + nnb
429  j0 = jrow - nnb
430  DO jrow = j0, j+2, -nnb
431  ppw = ppwo
432  len = 2 + j - jcol
433  DO i = jrow+nnb-1, jrow, -1
434  c = a( i, j )
435  s = b( i, j )
436  DO jj = ppw, ppw+len-1
437  temp = work( jj + 2*nnb )
438  work( jj + 2*nnb ) = c*temp - s*work( jj )
439  work( jj ) = s*temp + c*work( jj )
440  END DO
441  len = len + 1
442  ppw = ppw - 2*nnb - 1
443  END DO
444  ppwo = ppwo + 4*nnb*nnb
445  END DO
446 *
447 * TOP denotes the number of top rows in A and B that will
448 * not be updated during the next steps.
449 *
450  IF( jcol.LE.2 ) THEN
451  top = 0
452  ELSE
453  top = jcol
454  END IF
455 *
456 * Propagate transformations through B and replace stored
457 * left sines/cosines by right sines/cosines.
458 *
459  DO jj = n, j+1, -1
460 *
461 * Update JJth column of B.
462 *
463  DO i = min( jj+1, ihi ), j+2, -1
464  c = a( i, j )
465  s = b( i, j )
466  temp = b( i, jj )
467  b( i, jj ) = c*temp - s*b( i-1, jj )
468  b( i-1, jj ) = s*temp + c*b( i-1, jj )
469  END DO
470 *
471 * Annihilate B( JJ+1, JJ ).
472 *
473  IF( jj.LT.ihi ) THEN
474  temp = b( jj+1, jj+1 )
475  CALL slartg( temp, b( jj+1, jj ), c, s,
476  $ b( jj+1, jj+1 ) )
477  b( jj+1, jj ) = zero
478  CALL srot( jj-top, b( top+1, jj+1 ), 1,
479  $ b( top+1, jj ), 1, c, s )
480  a( jj+1, j ) = c
481  b( jj+1, j ) = -s
482  END IF
483  END DO
484 *
485 * Update A by transformations from right.
486 * Explicit loop unrolling provides better performance
487 * compared to SLASR.
488 * CALL SLASR( 'Right', 'Variable', 'Backward', IHI-TOP,
489 * $ IHI-J, A( J+2, J ), B( J+2, J ),
490 * $ A( TOP+1, J+1 ), LDA )
491 *
492  jj = mod( ihi-j-1, 3 )
493  DO i = ihi-j-3, jj+1, -3
494  c = a( j+1+i, j )
495  s = -b( j+1+i, j )
496  c1 = a( j+2+i, j )
497  s1 = -b( j+2+i, j )
498  c2 = a( j+3+i, j )
499  s2 = -b( j+3+i, j )
500 *
501  DO k = top+1, ihi
502  temp = a( k, j+i )
503  temp1 = a( k, j+i+1 )
504  temp2 = a( k, j+i+2 )
505  temp3 = a( k, j+i+3 )
506  a( k, j+i+3 ) = c2*temp3 + s2*temp2
507  temp2 = -s2*temp3 + c2*temp2
508  a( k, j+i+2 ) = c1*temp2 + s1*temp1
509  temp1 = -s1*temp2 + c1*temp1
510  a( k, j+i+1 ) = c*temp1 + s*temp
511  a( k, j+i ) = -s*temp1 + c*temp
512  END DO
513  END DO
514 *
515  IF( jj.GT.0 ) THEN
516  DO i = jj, 1, -1
517  CALL srot( ihi-top, a( top+1, j+i+1 ), 1,
518  $ a( top+1, j+i ), 1, a( j+1+i, j ),
519  $ -b( j+1+i, j ) )
520  END DO
521  END IF
522 *
523 * Update (J+1)th column of A by transformations from left.
524 *
525  IF ( j .LT. jcol + nnb - 1 ) THEN
526  len = 1 + j - jcol
527 *
528 * Multiply with the trailing accumulated orthogonal
529 * matrix, which takes the form
530 *
531 *              [  U11  U12  ]
532 *          U = [            ],
533 *              [  U21  U22  ]
534 *
535 * where U21 is a LEN-by-LEN matrix and U12 is lower
536 * triangular.
537 *
538  jrow = ihi - nblst + 1
539  CALL sgemv( 'Transpose', nblst, len, one, work,
540  $ nblst, a( jrow, j+1 ), 1, zero,
541  $ work( pw ), 1 )
542  ppw = pw + len
543  DO i = jrow, jrow+nblst-len-1
544  work( ppw ) = a( i, j+1 )
545  ppw = ppw + 1
546  END DO
547  CALL strmv( 'Lower', 'Transpose', 'Non-unit',
548  $ nblst-len, work( len*nblst + 1 ), nblst,
549  $ work( pw+len ), 1 )
550  CALL sgemv( 'Transpose', len, nblst-len, one,
551  $ work( (len+1)*nblst - len + 1 ), nblst,
552  $ a( jrow+nblst-len, j+1 ), 1, one,
553  $ work( pw+len ), 1 )
554  ppw = pw
555  DO i = jrow, jrow+nblst-1
556  a( i, j+1 ) = work( ppw )
557  ppw = ppw + 1
558  END DO
559 *
560 * Multiply with the other accumulated orthogonal
561 * matrices, which take the form
562 *
563 *              [  U11  U12   0  ]
564 *              [                ]
565 *          U = [  U21  U22   0  ],
566 *              [                ]
567 *              [   0    0    I  ]
568 *
569 * where I denotes the (NNB-LEN)-by-(NNB-LEN) identity
570 * matrix, U21 is a LEN-by-LEN upper triangular matrix
571 * and U12 is an NNB-by-NNB lower triangular matrix.
572 *
573  ppwo = 1 + nblst*nblst
574  j0 = jrow - nnb
575  DO jrow = j0, jcol+1, -nnb
576  ppw = pw + len
577  DO i = jrow, jrow+nnb-1
578  work( ppw ) = a( i, j+1 )
579  ppw = ppw + 1
580  END DO
581  ppw = pw
582  DO i = jrow+nnb, jrow+nnb+len-1
583  work( ppw ) = a( i, j+1 )
584  ppw = ppw + 1
585  END DO
586  CALL strmv( 'Upper', 'Transpose', 'Non-unit', len,
587  $ work( ppwo + nnb ), 2*nnb, work( pw ),
588  $ 1 )
589  CALL strmv( 'Lower', 'Transpose', 'Non-unit', nnb,
590  $ work( ppwo + 2*len*nnb ),
591  $ 2*nnb, work( pw + len ), 1 )
592  CALL sgemv( 'Transpose', nnb, len, one,
593  $ work( ppwo ), 2*nnb, a( jrow, j+1 ), 1,
594  $ one, work( pw ), 1 )
595  CALL sgemv( 'Transpose', len, nnb, one,
596  $ work( ppwo + 2*len*nnb + nnb ), 2*nnb,
597  $ a( jrow+nnb, j+1 ), 1, one,
598  $ work( pw+len ), 1 )
599  ppw = pw
600  DO i = jrow, jrow+len+nnb-1
601  a( i, j+1 ) = work( ppw )
602  ppw = ppw + 1
603  END DO
604  ppwo = ppwo + 4*nnb*nnb
605  END DO
606  END IF
607  END DO
608 *
609 * Apply accumulated orthogonal matrices to A.
610 *
611  cola = n - jcol - nnb + 1
612  j = ihi - nblst + 1
613  CALL sgemm( 'Transpose', 'No Transpose', nblst,
614  $ cola, nblst, one, work, nblst,
615  $ a( j, jcol+nnb ), lda, zero, work( pw ),
616  $ nblst )
617  CALL slacpy( 'All', nblst, cola, work( pw ), nblst,
618  $ a( j, jcol+nnb ), lda )
619  ppwo = nblst*nblst + 1
620  j0 = j - nnb
621  DO j = j0, jcol+1, -nnb
622  IF ( blk22 ) THEN
623 *
624 * Exploit the structure of
625 *
626 *              [  U11  U12  ]
627 *          U = [            ]
628 *              [  U21  U22  ],
629 *
630 * where all blocks are NNB-by-NNB, U21 is upper
631 * triangular and U12 is lower triangular.
632 *
633  CALL sorm22( 'Left', 'Transpose', 2*nnb, cola, nnb,
634  $ nnb, work( ppwo ), 2*nnb,
635  $ a( j, jcol+nnb ), lda, work( pw ),
636  $ lwork-pw+1, ierr )
637  ELSE
638 *
639 * Ignore the structure of U.
640 *
641  CALL sgemm( 'Transpose', 'No Transpose', 2*nnb,
642  $ cola, 2*nnb, one, work( ppwo ), 2*nnb,
643  $ a( j, jcol+nnb ), lda, zero, work( pw ),
644  $ 2*nnb )
645  CALL slacpy( 'All', 2*nnb, cola, work( pw ), 2*nnb,
646  $ a( j, jcol+nnb ), lda )
647  END IF
648  ppwo = ppwo + 4*nnb*nnb
649  END DO
650 *
651 * Apply accumulated orthogonal matrices to Q.
652 *
653  IF( wantq ) THEN
654  j = ihi - nblst + 1
655  IF ( initq ) THEN
656  topq = max( 2, j - jcol + 1 )
657  nh = ihi - topq + 1
658  ELSE
659  topq = 1
660  nh = n
661  END IF
662  CALL sgemm( 'No Transpose', 'No Transpose', nh,
663  $ nblst, nblst, one, q( topq, j ), ldq,
664  $ work, nblst, zero, work( pw ), nh )
665  CALL slacpy( 'All', nh, nblst, work( pw ), nh,
666  $ q( topq, j ), ldq )
667  ppwo = nblst*nblst + 1
668  j0 = j - nnb
669  DO j = j0, jcol+1, -nnb
670  IF ( initq ) THEN
671  topq = max( 2, j - jcol + 1 )
672  nh = ihi - topq + 1
673  END IF
674  IF ( blk22 ) THEN
675 *
676 * Exploit the structure of U.
677 *
678  CALL sorm22( 'Right', 'No Transpose', nh, 2*nnb,
679  $ nnb, nnb, work( ppwo ), 2*nnb,
680  $ q( topq, j ), ldq, work( pw ),
681  $ lwork-pw+1, ierr )
682  ELSE
683 *
684 * Ignore the structure of U.
685 *
686  CALL sgemm( 'No Transpose', 'No Transpose', nh,
687  $ 2*nnb, 2*nnb, one, q( topq, j ), ldq,
688  $ work( ppwo ), 2*nnb, zero, work( pw ),
689  $ nh )
690  CALL slacpy( 'All', nh, 2*nnb, work( pw ), nh,
691  $ q( topq, j ), ldq )
692  END IF
693  ppwo = ppwo + 4*nnb*nnb
694  END DO
695  END IF
696 *
697 * Accumulate right Givens rotations if required.
698 *
699  IF ( wantz .OR. top.GT.0 ) THEN
700 *
701 * Initialize small orthogonal factors that will hold the
702 * accumulated Givens rotations in workspace.
703 *
704  CALL slaset( 'All', nblst, nblst, zero, one, work,
705  $ nblst )
706  pw = nblst * nblst + 1
707  DO i = 1, n2nb
708  CALL slaset( 'All', 2*nnb, 2*nnb, zero, one,
709  $ work( pw ), 2*nnb )
710  pw = pw + 4*nnb*nnb
711  END DO
712 *
713 * Accumulate Givens rotations into workspace array.
714 *
715  DO j = jcol, jcol+nnb-1
716  ppw = ( nblst + 1 )*( nblst - 2 ) - j + jcol + 1
717  len = 2 + j - jcol
718  jrow = j + n2nb*nnb + 2
719  DO i = ihi, jrow, -1
720  c = a( i, j )
721  a( i, j ) = zero
722  s = b( i, j )
723  b( i, j ) = zero
724  DO jj = ppw, ppw+len-1
725  temp = work( jj + nblst )
726  work( jj + nblst ) = c*temp - s*work( jj )
727  work( jj ) = s*temp + c*work( jj )
728  END DO
729  len = len + 1
730  ppw = ppw - nblst - 1
731  END DO
732 *
733  ppwo = nblst*nblst + ( nnb+j-jcol-1 )*2*nnb + nnb
734  j0 = jrow - nnb
735  DO jrow = j0, j+2, -nnb
736  ppw = ppwo
737  len = 2 + j - jcol
738  DO i = jrow+nnb-1, jrow, -1
739  c = a( i, j )
740  a( i, j ) = zero
741  s = b( i, j )
742  b( i, j ) = zero
743  DO jj = ppw, ppw+len-1
744  temp = work( jj + 2*nnb )
745  work( jj + 2*nnb ) = c*temp - s*work( jj )
746  work( jj ) = s*temp + c*work( jj )
747  END DO
748  len = len + 1
749  ppw = ppw - 2*nnb - 1
750  END DO
751  ppwo = ppwo + 4*nnb*nnb
752  END DO
753  END DO
754  ELSE
755 *
756  CALL slaset( 'Lower', ihi - jcol - 1, nnb, zero, zero,
757  $ a( jcol + 2, jcol ), lda )
758  CALL slaset( 'Lower', ihi - jcol - 1, nnb, zero, zero,
759  $ b( jcol + 2, jcol ), ldb )
760  END IF
761 *
762 * Apply accumulated orthogonal matrices to A and B.
763 *
764  IF ( top.GT.0 ) THEN
765  j = ihi - nblst + 1
766  CALL sgemm( 'No Transpose', 'No Transpose', top,
767  $ nblst, nblst, one, a( 1, j ), lda,
768  $ work, nblst, zero, work( pw ), top )
769  CALL slacpy( 'All', top, nblst, work( pw ), top,
770  $ a( 1, j ), lda )
771  ppwo = nblst*nblst + 1
772  j0 = j - nnb
773  DO j = j0, jcol+1, -nnb
774  IF ( blk22 ) THEN
775 *
776 * Exploit the structure of U.
777 *
778  CALL sorm22( 'Right', 'No Transpose', top, 2*nnb,
779  $ nnb, nnb, work( ppwo ), 2*nnb,
780  $ a( 1, j ), lda, work( pw ),
781  $ lwork-pw+1, ierr )
782  ELSE
783 *
784 * Ignore the structure of U.
785 *
786  CALL sgemm( 'No Transpose', 'No Transpose', top,
787  $ 2*nnb, 2*nnb, one, a( 1, j ), lda,
788  $ work( ppwo ), 2*nnb, zero,
789  $ work( pw ), top )
790  CALL slacpy( 'All', top, 2*nnb, work( pw ), top,
791  $ a( 1, j ), lda )
792  END IF
793  ppwo = ppwo + 4*nnb*nnb
794  END DO
795 *
796  j = ihi - nblst + 1
797  CALL sgemm( 'No Transpose', 'No Transpose', top,
798  $ nblst, nblst, one, b( 1, j ), ldb,
799  $ work, nblst, zero, work( pw ), top )
800  CALL slacpy( 'All', top, nblst, work( pw ), top,
801  $ b( 1, j ), ldb )
802  ppwo = nblst*nblst + 1
803  j0 = j - nnb
804  DO j = j0, jcol+1, -nnb
805  IF ( blk22 ) THEN
806 *
807 * Exploit the structure of U.
808 *
809  CALL sorm22( 'Right', 'No Transpose', top, 2*nnb,
810  $ nnb, nnb, work( ppwo ), 2*nnb,
811  $ b( 1, j ), ldb, work( pw ),
812  $ lwork-pw+1, ierr )
813  ELSE
814 *
815 * Ignore the structure of U.
816 *
817  CALL sgemm( 'No Transpose', 'No Transpose', top,
818  $ 2*nnb, 2*nnb, one, b( 1, j ), ldb,
819  $ work( ppwo ), 2*nnb, zero,
820  $ work( pw ), top )
821  CALL slacpy( 'All', top, 2*nnb, work( pw ), top,
822  $ b( 1, j ), ldb )
823  END IF
824  ppwo = ppwo + 4*nnb*nnb
825  END DO
826  END IF
827 *
828 * Apply accumulated orthogonal matrices to Z.
829 *
830  IF( wantz ) THEN
831  j = ihi - nblst + 1
832  IF ( initq ) THEN
833  topq = max( 2, j - jcol + 1 )
834  nh = ihi - topq + 1
835  ELSE
836  topq = 1
837  nh = n
838  END IF
839  CALL sgemm( 'No Transpose', 'No Transpose', nh,
840  $ nblst, nblst, one, z( topq, j ), ldz,
841  $ work, nblst, zero, work( pw ), nh )
842  CALL slacpy( 'All', nh, nblst, work( pw ), nh,
843  $ z( topq, j ), ldz )
844  ppwo = nblst*nblst + 1
845  j0 = j - nnb
846  DO j = j0, jcol+1, -nnb
847  IF ( initq ) THEN
848  topq = max( 2, j - jcol + 1 )
849  nh = ihi - topq + 1
850  END IF
851  IF ( blk22 ) THEN
852 *
853 * Exploit the structure of U.
854 *
855  CALL sorm22( 'Right', 'No Transpose', nh, 2*nnb,
856  $ nnb, nnb, work( ppwo ), 2*nnb,
857  $ z( topq, j ), ldz, work( pw ),
858  $ lwork-pw+1, ierr )
859  ELSE
860 *
861 * Ignore the structure of U.
862 *
863  CALL sgemm( 'No Transpose', 'No Transpose', nh,
864  $ 2*nnb, 2*nnb, one, z( topq, j ), ldz,
865  $ work( ppwo ), 2*nnb, zero, work( pw ),
866  $ nh )
867  CALL slacpy( 'All', nh, 2*nnb, work( pw ), nh,
868  $ z( topq, j ), ldz )
869  END IF
870  ppwo = ppwo + 4*nnb*nnb
871  END DO
872  END IF
873  END DO
874  END IF
875 *
876 * Use unblocked code to reduce the rest of the matrix
877 * Avoid re-initialization of modified Q and Z.
878 *
879  compq2 = compq
880  compz2 = compz
881  IF ( jcol.NE.ilo ) THEN
882  IF ( wantq )
883  $ compq2 = 'V'
884  IF ( wantz )
885  $ compz2 = 'V'
886  END IF
887 *
888  IF ( jcol.LT.ihi )
889  $ CALL sgghrd( compq2, compz2, n, jcol, ihi, a, lda, b, ldb, q,
890  $ ldq, z, ldz, ierr )
891  work( 1 ) = real( lwkopt )
892 *
893  RETURN
894 *
895 * End of SGGHD3
896 *
Call graph: SGGHD3 calls sgghrd, slartg, slaset, sorm22, srot, sgemm, sgemv, strmv, slacpy, lsame, ilaenv, and xerbla.