the bid product; after making the payment, the bidder receives the product from the auctioneer.
5 CONCLUSION
Software development is a continuous process within the software development life cycle, and the development process can be adapted as user needs change over time. The project can readily be modified and enhanced in the future. Technologies for online auctions are changing the way business is done online. However, the uncooperative behavior of the major online auctioneers frequently impedes the expansion of auction-related research and the creation of new auction security methods, largely because of the lack of high-quality auction data and of literature on the design of online bidding processes. This application can be extended in the future with a great deal of functionality that has not yet been included, as many features fall under this broad area. The big data systems framework makes it possible to identify, describe, and analyze the most important parts of the bidding process, which helps users appreciate and understand the complex connections and relationships among its components. In the future, blockchain and smart contract techniques can make the application tamper-resistant and more efficient.
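As a rough illustration of the tamper-resistance idea, the following minimal sketch (in Python, using hypothetical record fields that are not taken from the present implementation) chains each stored bid to the hash of the previous record, so that any later modification of a bid invalidates the chain; a full blockchain or smart-contract deployment would go further, but the verification principle is the same.

    import hashlib
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class BidRecord:
        # Hypothetical fields for illustration; the real application may store more.
        bidder_id: str
        product_id: str
        amount: float
        prev_hash: str  # hash of the previous record, linking the chain

        def digest(self) -> str:
            # Hash the record's canonical JSON form so any change is detectable.
            payload = json.dumps(asdict(self), sort_keys=True).encode()
            return hashlib.sha256(payload).hexdigest()

    def append_bid(chain: list, bidder_id: str, product_id: str, amount: float) -> None:
        # Link the new bid to the digest of the most recent record.
        prev = chain[-1].digest() if chain else "genesis"
        chain.append(BidRecord(bidder_id, product_id, amount, prev_hash=prev))

    def verify_chain(chain: list) -> bool:
        # Recompute each link; a tampered record breaks the chain from that point on.
        expected = "genesis"
        for record in chain:
            if record.prev_hash != expected:
                return False
            expected = record.digest()
        return True

    if __name__ == "__main__":
        ledger = []
        append_bid(ledger, "bidder-1", "item-42", 100.0)
        append_bid(ledger, "bidder-2", "item-42", 120.0)
        print(verify_chain(ledger))   # True
        ledger[0].amount = 999.0      # simulate tampering with a stored bid
        print(verify_chain(ledger))   # False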